CN116149573B - Method, system, equipment and medium for processing queue by RAID card cluster - Google Patents

Method, system, equipment and medium for processing queue by RAID card cluster

Info

Publication number
CN116149573B
CN116149573B (application CN202310419988.4A)
Authority
CN
China
Prior art keywords
queue
original
new
queue element
elements
Prior art date
Legal status
Active
Application number
CN202310419988.4A
Other languages
Chinese (zh)
Other versions
CN116149573A (en)
Inventor
李飞龙
王见
孙明刚
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202310419988.4A priority Critical patent/CN116149573B/en
Publication of CN116149573A publication Critical patent/CN116149573A/en
Application granted granted Critical
Publication of CN116149573B publication Critical patent/CN116149573B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608Saving storage space on storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/062Securing storage systems
    • G06F3/0622Securing storage systems in relation to access
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0689Disk arrays, e.g. RAID, JBOD
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present invention relates to the field of storage servers. The invention provides a method, a system, equipment and a storage medium for processing a queue by a RAID card cluster, wherein the processing method comprises the following steps: reading an original head and an original global atomic variable of the annular queue; determining a queue residual space according to the original head and the original global atomic variable, and determining the number of queue elements of the enqueuing operation according to the queue residual space and the number of queue elements needing to be added into the annular queue; constructing a new global atomic variable according to the original global atomic variable and the number of queue elements of the enqueuing operation, and performing atomic operation according to the original global atomic variable and the new global atomic variable; responding to the returned value of the atomic operation as the original global atomic variable, and filling the corresponding position in the annular queue with the queue element needing to be added into the annular queue; in response to there being a queue element that needs to leave the circular queue, a corresponding number of queue elements are deleted starting from the head of the queue. The invention improves the processing performance of the annular queue.

Description

Method, system, equipment and medium for processing queue by RAID card cluster
Technical Field
The present invention relates to the field of storage servers, and in particular, to a method, a system, an apparatus, and a storage medium for processing a queue by a RAID card cluster.
Background
With the continuous development of chip technology in recent years, hard RAID (Redundant Array of Independent Disks) storage technology has emerged. The most important component of hard RAID storage technology is the RAID card, which hands the algorithms, data management and part of the logic functions of a soft RAID storage system over to hardware for management and implementation, so as to improve the data security and storage I/O performance of the storage system. At present, the industry has upgraded the single-CPU-core RAID card to a multi-core RAID card, in which multiple CPU cores form an acceleration cluster and each CPU core serves as an acceleration unit.
When multiple CPU cores cast block disk-flush tasks into the queue (i.e., when the cache queue has multiple producers), the producers need some mechanism to avoid conflicts between concurrent enqueue operations. The technical scheme currently adopted in the industry is as follows: the multiple CPU-core producers use a mutual exclusion mechanism and serialize the enqueue actions, ensuring that only one producer is enqueuing at any moment; but with a mutual exclusion lock the concurrency performance is inevitably reduced. The CPU core currently holding the mutex is responsible for enqueuing the QEs handed over by itself and by the other CPU cores. Such an implementation requires the CPU cores to jointly maintain an enqueue mutex EQlock and a lock-free singly linked waiting queue Qwait shared by all CPU-core producers. Thus, when performing enqueue and dequeue operations, if another CPU core holds the mutex, the current CPU core must wait until the lock-holding CPU releases the lock before it can enqueue or dequeue; maintaining the waiting queue Qwait inside the RAID card makes the already tight memory resources even tighter, which in turn reduces the read/write I/O performance of the RAID card; and additional Qwait linked-list operation overhead and QE copy overhead are incurred. A sketch of this prior-art scheme is given below.
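For illustration only, the following C sketch shows the mutex-serialized scheme just described. The names EQlock and Qwait come from the description above, while `qwait_push`, `qwait_pop`, `ring_enqueue` and the opaque `struct qe` are hypothetical helpers introduced here, not part of the patent.

```c
#include <pthread.h>
#include <stddef.h>

struct qe;                                    /* queue element (opaque here) */
extern void qwait_push(struct qe *qe);        /* shared lock-free wait list  */
extern struct qe *qwait_pop(void);
extern void ring_enqueue(struct qe *qe);      /* the actual ring-queue insert */

/* Prior-art scheme (sketch): all producer cores share one enqueue mutex
 * (EQlock) and one waiting queue (Qwait); whichever core holds the lock
 * enqueues on behalf of everyone, so enqueues are fully serialized and
 * every QE pays an extra Qwait copy/list operation. */
static pthread_mutex_t eq_lock = PTHREAD_MUTEX_INITIALIZER;   /* EQlock */

void prior_art_enqueue(struct qe *qe)
{
    qwait_push(qe);                           /* hand the QE over via Qwait  */
    if (pthread_mutex_trylock(&eq_lock) != 0)
        return;                               /* the lock holder enqueues it */
    struct qe *e;
    while ((e = qwait_pop()) != NULL)
        ring_enqueue(e);                      /* one core enqueues at a time */
    pthread_mutex_unlock(&eq_lock);
}
```

The trylock-and-drain pattern here is only meant to make the serialization point visible; it is this single lock, plus the Qwait copies, that the embodiments below remove.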
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide a method, a system, a computer device and a computer-readable storage medium for processing a queue by a RAID card cluster, which design lock-free enqueue and dequeue operations for multiple CPU-core producers. On top of the original queue definition, a Round definition is added to the ring queue: one pass from QE_0 to QE_N-1 is called a round, and the producer and the consumer each maintain a "current round" variable current_phase that alternates in the order 1->0->1->0. A Sequence Number (SN) is added to each QE in the ring queue, and a global atomic variable Tail_Req is added, so that the peak queue throughput is no longer limited by the bottleneck effect of the multi-CPU-core mutual exclusion lock and the waiting queue on performance.
Based on the above object, an aspect of the embodiments of the present invention provides a method for processing a queue by a RAID card cluster, including the following steps: reading an original head of the annular queue and an original global atomic variable in response to the existence of a queue element needing to be added into the annular queue, wherein the original global atomic variable comprises an original reference index, an original queue element index and an original current phase; determining a queue residual space according to the original head and the original global atomic variable, and determining the number of queue elements of the enqueuing operation according to the queue residual space and the number of queue elements needing to be added into the annular queue; constructing a new global atomic variable according to the original global atomic variable and the number of queue elements of the enqueuing operation, and performing atomic operation according to the original global atomic variable and the new global atomic variable; responding to the return value of the atomic operation as the original global atomic variable, and filling the queue elements needing to be added into the annular queue to the corresponding positions in the annular queue; and in response to there being a queue element that needs to leave the circular queue, deleting a corresponding number of queue elements from the head of the queue.
In some embodiments, the determining the queue remaining space from the original header and the original global atomic variable comprises: subtracting one from the difference between the original queue element index in the original global atomic variable and the original head to obtain an intermediate result; and taking the remainder obtained by dividing the intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as the queue remaining space.
In some embodiments, the determining the number of queue elements of the enqueuing operation according to the size of the queue remaining space and the number of queue elements to be added to the ring queue includes: and taking the minimum value between the residual space of the queue and the number of queue elements needing to be added into the annular queue as the number of queue elements of the enqueue operation.
In some embodiments, said constructing a new global atomic variable from said original global atomic variable and the number of queue elements of said enqueue operation comprises: adding one to the original reference index of the original global atomic variable to serve as the new reference index of the new global atomic variable; and taking the sum of the original queue element index of the original global atomic variable and the number of queue elements of the enqueuing operation as a second intermediate result, and taking the remainder obtained by dividing the second intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as the new queue element index of the new global atomic variable.
In some embodiments, the treatment method further comprises: and re-checking the residual space of the queue in response to the returned value of the atomic operation not being the original global atomic variable.
In some embodiments, the filling the corresponding location in the ring queue with the queue element that needs to be added to the ring queue includes: the queue elements between the original queue element index and the new queue element index are filled.
In some implementations, the populating the queue element between the original queue element index to the new queue element index includes: setting the original current phase of the original global atomic variable as the current phase, and setting the serial number of each queue element to be reduced one by one.
In some embodiments, the setting the sequence number of each queue element to decrease one by one includes: taking the number of queue elements of the enqueuing operation minus one as the initial value of the sequence number (i.e., the sequence number of the first queue element), and setting the sequence number of each queue element other than the first queue element to be one less than the sequence number of the preceding adjacent queue element.
In some embodiments, the filling the corresponding location in the ring queue with the queue element that needs to be added to the ring queue includes: the last step of the set fill operation is to assign a value to the current phase of the queue element corresponding to the original queue element index.
In some embodiments, the treatment method further comprises: and performing a second atomic operation according to the tail of the queue, the original queue element index and the new queue element index to judge whether the tail register is updated successfully.
In some embodiments, performing a second atomic operation according to the tail of the queue, the original queue element index, and the new queue element index to determine whether the tail register is successfully updated includes: and responding to the second atomic operation return value as the original queue element index, indicating that the update of the tail register is successful, otherwise, the update of the tail register fails.
In some embodiments, the treatment method further comprises: and judging whether the tail register is updated successfully or not according to the first queue element indexed by the new queue element.
In some embodiments, the determining whether the tail register is updated successfully according to the first queue element indexed by the new queue element includes: and in response to the queue element index of the first queue element after the new queue element index being non-zero and the current phase of the first queue element being equal to the original current phase, indicating that the tail register has not been updated.
In some embodiments, the determining whether the tail register is updated successfully according to the first queue element indexed by the new queue element includes: and in response to the queue element index of the first queue element after the new queue element index being zero and the current phase of the first queue element not being equal to the original current phase, indicating that the tail register has not been updated.
In some embodiments, the treatment method further comprises: in response to the tail register not having been updated, the original queue element index is set equal to a new queue element index.
In some embodiments, the treatment method further comprises: taking the sum of the serial number of the first queue element after indexing the new queue element and the index value of the original queue element as a third intermediate result, and taking the remainder obtained by dividing the third intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as the new queue element index.
In some embodiments, the global atomic variable comprising the reference index, the queue element index and the current phase is set such that: each write access increments the reference index by one, and the current phase alternates between one and zero from one round to the next.
In another aspect of the embodiment of the present invention, there is provided a processing system for a queue of a RAID card cluster, including: the setting module is configured to read an original head of the annular queue and an original global atomic variable in response to the existence of a queue element needing to be added into the annular queue, wherein the original global atomic variable comprises an original reference index, an original queue element index and an original current phase; the determining module is configured to determine a queue residual space according to the original head and the original global atomic variable, and determine the number of queue elements of the enqueuing operation according to the queue residual space and the number of queue elements needing to be added into the annular queue; the construction module is configured to construct a new global atomic variable according to the original global atomic variable and the number of queue elements of the enqueuing operation, and perform atomic operation according to the original global atomic variable and the new global atomic variable; the filling module is configured to respond to the return value of the atomic operation as the original global atomic variable and fill the queue element needing to be added into the annular queue to the corresponding position in the annular queue; and a deletion module configured to delete a corresponding number of queue elements from the head of the queue in response to the presence of a queue element that needs to leave the circular queue.
In yet another aspect of the embodiment of the present invention, there is also provided a computer apparatus, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions when executed by the processor implementing the steps of the above processing method.
In yet another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above processing method.
The invention has the following beneficial technical effects:
1. the embodiment of the invention increases the definition of the round in the annular queue, thereby realizing the enqueuing and dequeuing operations of the multi-CPU core producer under the condition of no mutual exclusion lock, and fully exerting the performance of a plurality of CPU cores in an acceleration cluster;
2. the embodiment of the invention increases global atomic variables, thereby realizing that the peak value of the throughput rate of the queue is not limited by the mutual exclusion lock of multiple CPU cores and the bottleneck influence of waiting queues on the performance;
3. there is no additional waiting-queue linked-list operation overhead or queue element copy overhead, which improves the performance stability of the multiple CPU cores in the acceleration cluster;
4. without adding any hardware, the method distributes each block of a host read/write I/O task into the annular queue after the task is split into blocks, which effectively improves the processing performance of the annular queue, greatly improves the I/O processing capability of the RAID card, and enhances the competitiveness in the RAID card market.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are necessary for the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention and that other embodiments may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an embodiment of a method for processing a queue by a RAID card cluster according to the present invention;
FIG. 2 is a block diagram of the host I/O according to the present invention after being split according to the stripes;
FIG. 3 is a schematic diagram of enqueuing and dequeuing a circular queue provided by the present invention;
FIG. 4 is a schematic diagram of a concurrent enqueuing process of two CPU cores provided by the present invention;
FIG. 5 is a schematic diagram of a queue status during concurrent enqueuing according to the present invention;
FIG. 6 is a schematic diagram of an embodiment of a RAID card cluster-to-queue processing system according to the present invention;
FIG. 7 is a schematic diagram of a hardware structure of an embodiment of a computer device for a method for processing a queue of a RAID card cluster according to the present invention;
fig. 8 is a schematic diagram of an embodiment of a computer storage medium of a method for processing a queue by a RAID card cluster according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
It should be noted that, in the embodiments of the present invention, all the expressions "first" and "second" are used to distinguish two entities with the same name but different entities or different parameters, and it is noted that the "first" and "second" are only used for convenience of expression, and should not be construed as limiting the embodiments of the present invention, and the following embodiments are not described one by one.
In a first aspect of the embodiment of the present invention, an embodiment of a method for processing a queue by a RAID card cluster is provided. Fig. 1 is a schematic diagram of an embodiment of a method for processing a queue by a RAID card cluster according to the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
S1, reading an original head of an annular queue and an original global atomic variable in response to the existence of a queue element needing to be added into the annular queue, wherein the original global atomic variable comprises an original reference index, an original queue element index and an original current phase;
s2, determining a queue residual space according to the original head and the original global atomic variable, and determining the number of queue elements of the enqueuing operation according to the queue residual space and the number of queue elements needing to be added into the annular queue;
s3, constructing a new global atomic variable according to the original global atomic variable and the number of queue elements of the enqueuing operation, and performing atomic operation according to the original global atomic variable and the new global atomic variable;
s4, responding to the return value of the atomic operation as the original global atomic variable, and filling the queue elements needing to be added into the annular queue to the corresponding positions in the annular queue;
s5, deleting the corresponding number of queue elements from the head of the queue in response to the existence of the queue elements needing to leave the annular queue.
RAID (Redundant Array of Independent Disks) is a large-capacity disk group composed of multiple independent disks. RAID is a technique that combines multiple independent hard disks (physical hard disks) in different ways into a hard disk group (logical hard disk) to provide higher storage performance and data backup capability than a single hard disk. With this technique, data is cut into many sections, which are stored separately on the individual disks. Annular (ring) queue: the array-based ring queue is an optimization of the plain array-based queue; it makes full use of the array by treating it as a ring (simply by taking the modulus).
The RAID card controller in the RAID card is a chip consisting of a series of components such as a buffer memory, an I/O processor, a disk controller and disk connectors. The RAID card is a functional board that organizes the disks connected to the storage server into one or more RAID arrays according to RAID levels. Specifically, the RAID card is mounted on the PCIe bus, so it can be regarded as a peripheral of the storage server; the RAID card embodies the hard RAID storage technology built on top of the soft RAID storage technology.
After receiving an I/O read/write task issued by the foreground host, the RAID card splits the I/O range according to the stripes of the RAID array, splits each stripe according to the per-disk strips (as shown in figure 2), and submits each resulting block of strip data to the ring queue; the disk controller then takes the blocks of strip data from the ring queue in order and performs the disk-flush operation. Stripe: a set of location-related strips on different member disks of the array; it is the unit that organizes the strips across member disks. Strip (block/chunk): a member disk is divided into several equal-sized, address-adjacent blocks, which are called strips; a strip is generally regarded as an element of a stripe. The virtual disk maps its addresses to the addresses of the member disks in units of strips.
In response to there being a queue element that needs to be added to the ring queue, the original head of the ring queue and an original global atomic variable including an original reference index, an original queue element index, and an original current phase are read.
In some embodiments, the treatment method further comprises: each write access increments the reference index by one, alternating the current phase by one and zero in each current round.
The embodiment of the invention adds a global atomic variable Tail_Req, which is shared by all CPU-core producers of the ring queue. It consists of three parts: a reference index (RC: Reference Counter), a QE index (QEI: QE Index) and a current phase (PB: Phase Bit). RC represents the reference count of the Tail_Req variable; RC is incremented by 1 every time a write operation is performed on Tail_Req. QEI represents the location index of the QE in the ring buffer, where N is the queue depth of the ring queue (i.e., the maximum number of QEs that the ring queue can accommodate). PB represents the current phase value of the CPU-core producer. The Tail_Req variable is defined to be 64 bits wide (i.e., 8 bytes).
[Table: field layout of the 64-bit Tail_Req variable: reference index RC, QE index QEI, phase bit PB]
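As a minimal sketch of how the 64-bit Tail_Req could be laid out, assuming illustrative field widths (the description only fixes the total width at 64 bits; the exact split between RC and QEI and the bit ordering are assumptions made here):

```c
#include <stdint.h>

/* Hypothetical packing of the shared 64-bit Tail_Req atomic variable.
 * raw is the value handed to the 64-bit CAS; the bit-field widths and
 * ordering below are assumptions for illustration only. */
typedef union {
    uint64_t raw;
    struct {
        uint64_t pb  : 1;    /* PB : phase bit of the producer's current round */
        uint64_t qei : 31;   /* QEI: location index of the QE in the ring buffer */
        uint64_t rc  : 32;   /* RC : reference index, +1 on every write to Tail_Req */
    } f;
} tail_req_t;
```

Packing all three fields into one 64-bit word is what allows a single compare-and-swap to reserve a segment of the queue, as described in the following steps.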
Assuming that one CPU core has M queue elements that need to be added to the ring queue, read Head, denoted Head_old (the original head). Read Tail_Req, denoted Tail_Req_old (the original global atomic variable), which contains RC_old (the original reference index), QEI_old (the original queue element index) and PB_old (the original current phase).
And determining a queue residual space according to the original head and the original global atomic variable, and determining the number of queue elements of the enqueuing operation according to the queue residual space and the number of queue elements needing to be added into the annular queue.
In some embodiments, the determining the queue remaining space from the original header and the original global atomic variable comprises: subtracting one from the difference between the original queue element index in the original global atomic variable and the original head to obtain an intermediate result; and taking the remainder obtained by dividing the intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as the queue remaining space. That is, calculate QEI_old - 1 - Head_old to obtain the intermediate result, and calculate (QEI_old - 1 - Head_old) mod N to obtain the queue remaining space, where N is the maximum number of queue elements that the ring queue can accommodate.
In some embodiments, the determining the number of queue elements of the enqueuing operation according to the size of the queue remaining space and the number of queue elements to be added to the ring queue includes: and taking the minimum value between the residual space of the queue and the number of queue elements needing to be added into the annular queue as the number of queue elements of the enqueue operation. The minimum between the remaining space and M is taken as the number of QEs for the enqueue operation, denoted EQN (Enqueue Number).
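A minimal sketch of the two calculations above; the function names are illustrative, and the `+ n` term is only there to keep the intermediate value non-negative before taking the modulus:

```c
#include <stdint.h>

/* Queue remaining space = (QEI_old - 1 - Head_old) mod N, where N is the
 * maximum number of QEs the ring queue can accommodate. */
static inline uint32_t queue_free_space(uint32_t qei_old, uint32_t head_old,
                                        uint32_t n)
{
    return (qei_old + n - 1 - head_old) % n;
}

/* EQN = min(queue remaining space, M), the number of QEs actually enqueued. */
static inline uint32_t enqueue_count(uint32_t free_space, uint32_t m)
{
    return free_space < m ? free_space : m;
}
```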
And constructing a new global atomic variable according to the original global atomic variable and the number of queue elements of the enqueuing operation, and performing atomic operation according to the original global atomic variable and the new global atomic variable.
In some embodiments, said constructing a new global atomic variable from said original global atomic variable and the number of queue elements of said enqueue operation comprises: adding one to the original reference index of the original global atomic variable to serve as the new reference index of the new global atomic variable; and taking the sum of the original queue element index of the original global atomic variable and the number of queue elements of the enqueuing operation as a second intermediate result, and taking the remainder obtained by dividing the second intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as the new queue element index of the new global atomic variable.
Construct a new global atomic variable Tail_Req_new, where RC_new = RC_old + 1 and QEI_new = (QEI_old + EQN) mod N, N being the maximum number of queue elements that the ring queue can accommodate. Tail_Req_old and Tail_Req_new are variables private to each CPU core, while Tail_Req is a variable shared by all CPU cores.
In some embodiments, the treatment method further comprises: and re-checking the residual space of the queue in response to the returned value of the atomic operation not being the original global atomic variable.
Perform the atomic operation CAS(&Tail_Req, Tail_Req_old, Tail_Req_new). If the return value is Tail_Req_old, Tail_Req_new has been successfully written into Tail_Req, and the subsequent steps are performed. If the return value is not equal to Tail_Req_old, it indicates that another CPU core has already completed a CAS operation and preempted the segment of space starting at index QEI_old; in this case the queue remaining space is checked again and preemption is initiated again. Atomic operation CAS (Compare And Swap) generally refers to an atomic operation that, for a given variable, first compares whether its value in memory equals an expected value and, if so, assigns it a new value.
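The reservation step could look like the following hedged sketch in C11 atomics, reusing the illustrative tail_req_t layout from the earlier sketch; the handling of the PB field on a round flip is deliberately omitted, and all names are assumptions:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

extern _Atomic uint64_t tail_req;   /* the shared 64-bit Tail_Req */

/* Try to reserve EQN slots starting at Tail_Req_old.QEI. Returns true when
 * this core's CAS succeeded (return value equal to Tail_Req_old); on failure
 * the caller re-reads Head and Tail_Req, re-computes the remaining space and
 * initiates preemption again, as the description requires. */
static bool reserve_slots(tail_req_t old, uint32_t eqn, uint32_t n,
                          tail_req_t *out_new)
{
    tail_req_t next = old;
    next.f.rc  = old.f.rc + 1;            /* RC_new  = RC_old + 1            */
    next.f.qei = (old.f.qei + eqn) % n;   /* QEI_new = (QEI_old + EQN) mod N */
    /* PB handling on a round flip is omitted in this sketch. */

    uint64_t expected = old.raw;
    bool won = atomic_compare_exchange_strong(&tail_req, &expected, next.raw);
    if (won)
        *out_new = next;
    return won;                           /* false: another core preempted   */
}
```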
And in response to the returned value of the atomic operation being the original global atomic variable, filling corresponding positions in the ring queue with queue elements which need to be added into the ring queue.
In some embodiments, the filling the corresponding location in the ring queue with the queue element that needs to be added to the ring queue includes: the queue elements between the original queue element index and the new queue element index are filled.
In some implementations, the populating the queue element between the original queue element index to the new queue element index includes: setting the original current phase of the original global atomic variable as the current phase, and setting the serial number of each queue element to be reduced one by one.
In some embodiments, the setting the sequence number of each queue element to decrease one by one includes: taking the number of queue elements of the enqueuing operation minus one as the initial value of the sequence number (i.e., the sequence number of the first queue element), and setting the sequence number of each queue element other than the first queue element to be one less than the sequence number of the preceding adjacent queue element.
In some embodiments, the filling the corresponding location in the ring queue with the queue element that needs to be added to the ring queue includes: the last step of the set fill operation is to assign a value to the current phase of the queue element corresponding to the original queue element index.
Fill the QEs between QEI_old and QEI_new. The phase bit is filled with the value PB_old (taking the round flip into account), the SN value of the first queue element is EQN-1, and the sequence number of each subsequent queue element is 1 less than the sequence number of the previous queue element. In particular, the phase-bit assignment of the QE corresponding to QEI_old must be placed in the last step of all QE filling actions.
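A hedged sketch of this fill step is given below; the qe_t layout, the ring[] array and the use of release stores to publish the phase bit last are illustrative assumptions, and round flips of the phase value are ignored:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative queue element: only the fields the description mentions. */
typedef struct {
    _Atomic uint8_t phase;   /* phase bit copied from the producer's round  */
    uint32_t        sn;      /* descending sequence number within the burst */
    /* ... payload: one block of strip data ... */
} qe_t;

extern qe_t ring[];          /* the ring queue, N elements */

static void fill_qes(uint32_t qei_old, uint32_t eqn, uint32_t n, uint8_t pb)
{
    for (uint32_t i = 0; i < eqn; i++) {
        uint32_t idx = (qei_old + i) % n;
        ring[idx].sn = eqn - 1 - i;           /* SN: EQN-1 down to 0 */
        /* ... copy the payload of the i-th block into ring[idx] ... */
        if (i != 0)                           /* publish all but the first QE */
            atomic_store_explicit(&ring[idx].phase, pb, memory_order_release);
    }
    /* The phase bit of the QE at QEI_old is written last, so other cores and
     * the consumer only ever observe a completely filled burst. */
    atomic_store_explicit(&ring[qei_old].phase, pb, memory_order_release);
}
```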
In response to there being a queue element that needs to leave the circular queue, a corresponding number of queue elements are deleted starting from the head of the queue.
In some embodiments, the treatment method further comprises: and performing a second atomic operation according to the tail of the queue, the original queue element index and the new queue element index to judge whether the tail register is updated successfully.
In some embodiments, performing a second atomic operation according to the tail of the queue, the original queue element index, and the new queue element index to determine whether the tail register is successfully updated includes: and responding to the second atomic operation return value as the original queue element index, indicating that the update of the tail register is successful, otherwise, the update of the tail register fails.
Perform the atomic operation CAS(&Tail, QEI_old, QEI_new). If the return value is QEI_old, the Tail register has been updated successfully.
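In code, this second atomic operation could be sketched as follows (modeling the Tail register as an atomic 32-bit variable is an assumption made for illustration):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

extern _Atomic uint32_t tail;   /* the Tail register of the ring queue */

/* Second atomic operation: CAS(&Tail, QEI_old, QEI_new). Returns true when
 * the return value equals QEI_old, i.e. the Tail register was updated. */
static bool try_update_tail(uint32_t qei_old, uint32_t qei_new)
{
    uint32_t expected = qei_old;
    return atomic_compare_exchange_strong(&tail, &expected, qei_new);
}
```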
In some embodiments, the treatment method further comprises: and judging whether the tail register is updated successfully or not according to the first queue element indexed by the new queue element.
In some embodiments, the determining whether the tail register is updated successfully according to the first queue element indexed by the new queue element includes: and in response to the queue element index of the first queue element after the new queue element index being non-zero and the current phase of the first queue element being equal to the original current phase, indicating that the tail register has not been updated.
In some embodiments, the determining whether the tail register is updated successfully according to the first queue element indexed by the new queue element includes: and in response to the queue element index of the first queue element after the new queue element index being zero and the current phase of the first queue element not being equal to the original current phase, indicating that the tail register has not been updated.
In some embodiments, the treatment method further comprises: in response to the tail register not having been updated, the original queue element index is set equal to a new queue element index.
In some embodiments, the treatment method further comprises: taking the sum of the serial number of the first queue element after indexing the new queue element and the index value of the original queue element as a third intermediate result, and taking the remainder obtained by dividing the third intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as the new queue element index.
Read the first QE after QEI_new, i.e. the QE with index = QEI_new. If index != 0 and the phase bit of this QE is equal to PB_old, or index == 0 and the phase bit of this QE is not equal to PB_old, then the QE of the next segment has already been filled by another CPU core, but because the preceding QEs were not yet filled to completion, the Tail has not been updated. In this case, update QEI_old = QEI_new and QEI_new = (QEI_old + SN) mod N.
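Putting the Tail update and this check together, the following is a hedged sketch of the chained Tail advance; it reuses the illustrative qe_t, ring[] and try_update_tail() names from the sketches above. One assumption is made explicit in the code: the window is extended past the entire follow-on burst, i.e. by SN + 1 elements (SN counts down to 0), which matches the worked two-CPU example given below.

```c
/* After filling its own burst, a producer tries to advance Tail. If the QE at
 * index QEI_new has already been filled by another core (same round: index
 * != 0 and phase == PB_old; wrapped round: index == 0 and phase != PB_old),
 * that follow-on burst is published as well by extending the window and
 * issuing another CAS on Tail. */
static void advance_tail(uint32_t qei_old, uint32_t qei_new,
                         uint8_t pb_old, uint32_t n)
{
    for (;;) {
        if (!try_update_tail(qei_old, qei_new))
            return;            /* preceding QEs not yet filled: end the flow */

        qe_t *next = &ring[qei_new];
        uint8_t phase = atomic_load_explicit(&next->phase,
                                             memory_order_acquire);
        bool follow_on_filled = (qei_new != 0 && phase == pb_old) ||
                                (qei_new == 0 && phase != pb_old);
        if (!follow_on_filled)
            return;            /* no enqueued burst after ours: Tail is final */

        qei_old = qei_new;
        /* Extend past the whole follow-on burst of SN + 1 elements
         * (assumption consistent with the worked example below). */
        qei_new = (qei_old + next->sn + 1) % n;
    }
}
```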
The Round definition is added to the circular queue: one pass from QE_0 to QE_N-1 is referred to as a round. The producer CPU core and the consumer CPU core each need to maintain a "current round" variable current_phase, which alternates in the order 1->0->1->0. A 1-bit phase bit added to each QE represents the round of that QE. When a producer enqueues a QE, it fills current_phase into the phase bit of the QE; when the consumer dequeues, it checks whether the phase bit of the QE is identical to its current_phase. The concept of a Sequence Number (SN) is added to the QEs in the circular queue to represent the descending numbering of QEs within one burst enqueue operation: if M QEs are enqueued in a burst, the SN of the first QE is M-1 and the SN of the last QE is 0. As shown in fig. 3, the initial state is an empty queue, and current_phase is 1 for both producer and consumer. After the first burst enqueues 5 QEs, the phase bits of these 5 QEs are all 1, and SN decreases from 4 to 0 in QE order. The queue then goes through dequeuing of 3 QEs and burst enqueuing of 9 elements. The producer's phase flips from 1 to 0, and the SNs of the 9 burst-enqueued QEs are filled with 8 down to 0, while the consumer is still located in phase 1 of the previous round.
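On the consumer side, the round check described above could be sketched as follows for a single consumer; the helper reuses the illustrative qe_t and ring[] names, and the flip-on-wrap rule is an assumption derived from the round definition rather than an explicit statement in the description:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Single-consumer sketch: a QE at Head may be consumed only if its phase bit
 * matches the consumer's current_phase; after QE_{N-1} the consumer wraps to
 * QE_0 and flips current_phase (1 -> 0 -> 1 -> 0). */
static bool try_dequeue(uint32_t *head, uint8_t *current_phase, uint32_t n)
{
    qe_t *qe = &ring[*head];
    if (atomic_load_explicit(&qe->phase, memory_order_acquire) != *current_phase)
        return false;                 /* slot not yet published in this round */

    /* ... hand qe's payload (one block of strip data) to the disk controller ... */

    *head = (*head + 1) % n;
    if (*head == 0)
        *current_phase ^= 1;          /* a full round has been consumed */
    return true;
}
```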
The steps of the invention are illustrated by a specific example:
Fig. 4 is a schematic diagram of the concurrent enqueuing flow of two CPU cores provided by the present invention, and fig. 5 is a schematic diagram of the queue status during concurrent enqueuing provided by the present invention. As shown in fig. 4 to fig. 5, there are two CPUs (CPU_0 and CPU_1), and each CPU has two QEs to add to the ring queue. The enqueuing process is as follows:
1. In the initial state, the queue is empty, Head = Tail = 0, and Tail_Req = 0.
2. CPU_0 and CPU_1 simultaneously use CAS operations to initiate preemption of QEI=[0,1] in the queue. The parameters are Tail_Req_old = (0, 0), i.e. RC_old = 0 and QEI_old = 0, and Tail_Req_new = (1, 2), i.e. RC_new = 1 and QEI_new = 2. CPU_1 preempts successfully: its CAS return value is equal to Tail_Req_old. The CAS return value of CPU_0, however, is the Tail_Req already modified by CPU_1, indicating a preemption failure.
3. CPU_1 fills QEs into QEI=[0,1].
4. After the failure, CPU_0 immediately updates Tail_Req_old = (1, 2) and Tail_Req_new = (2, 4), and initiates a CAS operation again to preempt QEI=[2,3] in the queue. The CAS return value is equal to Tail_Req_old, indicating that CPU_0 preempts successfully.
5. CPU_0 fills QEs into QEI=[2,3].
6. After CPU_1 completes the QE filling of QEI=[0,1], it finds that the following segment of the ring queue has been preempted by another CPU core.
7. After CPU_0 completes the filling of QEI=[2,3], it uses the QEI values (2, 4) of its own Tail_Req_old and Tail_Req_new as parameters and initiates a CAS operation on Tail in an attempt to update Tail. The return value is 0, which indicates that the preceding QEs have not yet been filled, so the CPU_0 enqueue flow ends.
8. CPU_1 uses the QEI values (0, 2) of its own Tail_Req_old and Tail_Req_new as parameters and initiates a CAS operation on Tail. The return value is 0, indicating that Tail is updated successfully. It then inspects the QE at QEI=2 and finds that its phase bit has already been updated to 1 and that SN=1, indicating that two more QEs have been enqueued. The CAS operation on Tail is therefore initiated again with (2, 4) as the parameters. The return value is 2, indicating that the Tail update succeeded. The QE at QEI=4 is inspected next; its phase bit is found to be 0, indicating that no further QEs are waiting to be enqueued. The CPU_1 enqueue flow ends.
The method for accelerating the queue processing efficiency of the cluster can be applied to not only hard RAID storage technology (RAID card) but also soft RAID storage technology. The method and the system can be applied to the storage field, and can be used as reference in the fields of cloud computing, artificial intelligence and the like.
It should be noted that, in the embodiments of the method for processing a queue by a RAID card cluster, the steps may be interchanged, replaced, added or deleted; therefore, transformations of the ring queue obtained by such reasonable permutations and combinations should also fall within the protection scope of the present invention, and the protection scope of the present invention should not be limited to the embodiments.
Based on the above object, a second aspect of the embodiments of the present invention provides a processing system for a queue of a RAID card cluster. As shown in fig. 6, the system 200 includes the following modules: the setting module is configured to read an original head of the annular queue and an original global atomic variable in response to the existence of a queue element needing to be added into the annular queue, wherein the original global atomic variable comprises an original reference index, an original queue element index and an original current phase; the determining module is configured to determine a queue residual space according to the original head and the original global atomic variable, and determine the number of queue elements of the enqueuing operation according to the queue residual space and the number of queue elements needing to be added into the annular queue; the construction module is configured to construct a new global atomic variable according to the original global atomic variable and the number of queue elements of the enqueuing operation, and perform atomic operation according to the original global atomic variable and the new global atomic variable; the filling module is configured to respond to the return value of the atomic operation as the original global atomic variable and fill the queue element needing to be added into the annular queue to the corresponding position in the annular queue; and a deletion module configured to delete a corresponding number of queue elements from the head of the queue in response to the presence of a queue element that needs to leave the circular queue.
In some embodiments, the determination module is configured to: subtracting one difference value between an original queue element index in the original global atomic variable and the original head to obtain an intermediate result; and taking the remainder obtained by dividing the intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as a queue residual space.
In some embodiments, the determination module is configured to: and taking the minimum value between the residual space of the queue and the number of queue elements needing to be added into the annular queue as the number of queue elements of the enqueue operation.
In some embodiments, the construction module is configured to: adding a new reference index serving as the new global atomic variable to the original reference index of the original global atomic variable; and taking the sum of the original queue element index of the original global atomic variable and the number of queue elements of the enqueuing operation as a second intermediate result, and taking the remainder obtained by dividing the second intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as a new queue element index of the new global atomic variable.
In some embodiments, the system further comprises a return module configured to: and re-checking the residual space of the queue in response to the returned value of the atomic operation not being the original global atomic variable.
In some embodiments, the population module is configured to: the queue elements between the original queue element index and the new queue element index are filled.
In some embodiments, the population module is configured to: setting the original current phase of the original global atomic variable as the current phase, and setting the serial number of each queue element to be reduced one by one.
In some embodiments, the population module is configured to: the number of queue elements from the enqueuing operation is reduced by one as the initial value of the sequence number of the queue element, and the sequence number of each queue element except the first queue element is set to be one smaller than the sequence number of the previous adjacent queue element.
In some embodiments, the population module is configured to: the last step of the set fill operation is to assign a value to the current phase of the queue element corresponding to the original queue element index.
In some embodiments, the system further comprises a determination module configured to: and performing a second atomic operation according to the tail of the queue, the original queue element index and the new queue element index to judge whether the tail register is updated successfully.
In some embodiments, the determination module is configured to: and responding to the second atomic operation return value as the original queue element index, indicating that the update of the tail register is successful, otherwise, the update of the tail register fails.
In some embodiments, the system further comprises a second determination module configured to: and judging whether the tail register is updated successfully or not according to the first queue element indexed by the new queue element.
In some embodiments, the second determination module is configured to: and in response to the queue element index of the first queue element after the new queue element index being non-zero and the current phase of the first queue element being equal to the original current phase, indicating that the tail register has not been updated.
In some embodiments, the second determination module is configured to: and in response to the queue element index of the first queue element after the new queue element index being zero and the current phase of the first queue element not being equal to the original current phase, indicating that the tail register has not been updated.
In some embodiments, the system further comprises an indexing module configured to: in response to the tail register not having been updated, the original queue element index is set equal to a new queue element index.
In some embodiments, the system further comprises a second indexing module configured to: taking the sum of the serial number of the first queue element after indexing the new queue element and the index value of the original queue element as a third intermediate result, and taking the remainder obtained by dividing the third intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as the new queue element index.
In some embodiments, the system further comprises an update module configured to: each write access increments the reference index by one, alternating the current phase by one and zero in each current round.
In view of the above object, a third aspect of the embodiments of the present invention provides a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions being executable by the processor to perform the steps of: s1, reading an original head of an annular queue and an original global atomic variable in response to the existence of a queue element needing to be added into the annular queue, wherein the original global atomic variable comprises an original reference index, an original queue element index and an original current phase; s2, determining a queue residual space according to the original head and the original global atomic variable, and determining the number of queue elements of the enqueuing operation according to the queue residual space and the number of queue elements needing to be added into the annular queue; s3, constructing a new global atomic variable according to the original global atomic variable and the number of queue elements of the enqueuing operation, and performing atomic operation according to the original global atomic variable and the new global atomic variable; s4, responding to the return value of the atomic operation as the original global atomic variable, and filling the queue elements needing to be added into the annular queue to the corresponding positions in the annular queue; and S5, deleting the corresponding number of queue elements from the head of the queue in response to the existence of the queue elements needing to leave the annular queue.
In some embodiments, the determining the queue remaining space from the original header and the original global atomic variable comprises: subtracting one difference value between an original queue element index in the original global atomic variable and the original head to obtain an intermediate result; and taking the remainder obtained by dividing the intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as a queue residual space.
In some embodiments, the determining the number of queue elements of the enqueuing operation according to the size of the queue remaining space and the number of queue elements to be added to the ring queue includes: and taking the minimum value between the residual space of the queue and the number of queue elements needing to be added into the annular queue as the number of queue elements of the enqueue operation.
In some embodiments, said constructing a new global atomic variable from said original global atomic variable and the number of queue elements of said enqueue operation comprises: adding a new reference index serving as the new global atomic variable to the original reference index of the original global atomic variable; and taking the sum of the original queue element index of the original global atomic variable and the number of queue elements of the enqueuing operation as a second intermediate result, and taking the remainder obtained by dividing the second intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as a new queue element index of the new global atomic variable.
In some embodiments, the steps further comprise: and re-checking the residual space of the queue in response to the returned value of the atomic operation not being the original global atomic variable.
In some embodiments, the filling the corresponding location in the ring queue with the queue element that needs to be added to the ring queue includes: the queue elements between the original queue element index and the new queue element index are filled.
In some implementations, the populating the queue element between the original queue element index to the new queue element index includes: setting the original current phase of the original global atomic variable as the current phase, and setting the serial number of each queue element to be reduced one by one.
In some embodiments, the setting the sequence number of each queue element to decrease one by one includes: the number of queue elements from the enqueuing operation is reduced by one as the initial value of the sequence number of the queue element, and the sequence number of each queue element except the first queue element is set to be one smaller than the sequence number of the previous adjacent queue element.
In some embodiments, the filling the corresponding location in the ring queue with the queue element that needs to be added to the ring queue includes: the last step of the set fill operation is to assign a value to the current phase of the queue element corresponding to the original queue element index.
In some embodiments, the steps further comprise: and performing a second atomic operation according to the tail of the queue, the original queue element index and the new queue element index to judge whether the tail register is updated successfully.
In some embodiments, performing a second atomic operation according to the tail of the queue, the original queue element index, and the new queue element index to determine whether the tail register is successfully updated includes: and responding to the second atomic operation return value as the original queue element index, indicating that the update of the tail register is successful, otherwise, the update of the tail register fails.
In some embodiments, the steps further comprise: and judging whether the tail register is updated successfully or not according to the first queue element indexed by the new queue element.
In some embodiments, the determining whether the tail register is updated successfully according to the first queue element indexed by the new queue element includes: and in response to the queue element index of the first queue element after the new queue element index being non-zero and the current phase of the first queue element being equal to the original current phase, indicating that the tail register has not been updated.
In some embodiments, the determining whether the tail register is updated successfully according to the first queue element indexed by the new queue element includes: and in response to the queue element index of the first queue element after the new queue element index being zero and the current phase of the first queue element not being equal to the original current phase, indicating that the tail register has not been updated.
In some embodiments, the steps further comprise: in response to the tail register not having been updated, the original queue element index is set equal to a new queue element index.
In some embodiments, the steps further comprise: taking the sum of the serial number of the first queue element after indexing the new queue element and the index value of the original queue element as a third intermediate result, and taking the remainder obtained by dividing the third intermediate result by the maximum number of queue elements which can be accommodated by the annular queue as the new queue element index.
In some embodiments, the setting global atomic variables including the reference index, the queue element index, and the current phase includes: each write access increases the reference index once, and the current phase alternates between ones and zeros in each current round.
Fig. 7 is a schematic hardware structure of an embodiment of a computer device according to the method for processing a queue by a RAID card cluster provided by the present invention.
Taking the example of the apparatus shown in fig. 7, a processor 301 and a memory 302 are included in the apparatus.
The processor 301 and the memory 302 may be connected by a bus or otherwise, for example in fig. 7.
The memory 302, as a non-volatile computer-readable storage medium, is used for storing non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the method for processing a queue by a RAID card cluster in the embodiments of the present application. The processor 301 executes various functional applications and data processing of the server, that is, implements the method for processing a queue by a RAID card cluster, by running the non-volatile software programs, instructions and modules stored in the memory 302.
The memory 302 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and at least one application program required for a function, and the storage data area may store data created according to the use of the method for processing a queue by a RAID card cluster, and the like. In addition, the memory 302 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 302 may optionally include memory located remotely from the processor 301, and such remote memory may be connected to the local module via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The memory 302 stores one or more computer instructions 303 corresponding to the method for processing a queue by a RAID card cluster; when executed by the processor 301, these instructions perform the method for processing a queue by a RAID card cluster in any of the method embodiments described above.
Any embodiment of a computer device that executes the method for processing a queue by a RAID card cluster can achieve effects the same as or similar to those of any corresponding method embodiment described above.
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the method for processing a queue by a RAID card cluster.
Fig. 8 is a schematic diagram of an embodiment of a computer storage medium for the method for processing a queue by a RAID card cluster according to the present invention. Taking the computer storage medium shown in Fig. 8 as an example, the computer-readable storage medium 401 stores a computer program 402 that, when executed by a processor, performs the method described above.
Finally, it should be noted that, as will be understood by those skilled in the art, all or part of the processes in the above method embodiments may be implemented by a computer program instructing related hardware. The program of the method for processing a queue by a RAID card cluster may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium of the program may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like. The computer program embodiments described above may achieve effects the same as or similar to those of any of the corresponding method embodiments.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The serial numbers of the foregoing embodiments of the present invention are for description only and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will appreciate that the above discussion of any embodiment is merely exemplary and is not intended to imply that the scope of the disclosure of the embodiments of the present invention, including the claims, is limited to these examples; within the concept of the embodiments of the present invention, features of the above embodiments or of different embodiments may also be combined, and many other variations of the different aspects of the embodiments of the present invention exist which are not described in detail for the sake of brevity. Therefore, any omission, modification, equivalent replacement, or improvement made within the spirit and principles of the embodiments should be included in the protection scope of the embodiments of the present invention.

Claims (20)

1. A method for processing a queue by a RAID card cluster, characterized by comprising the following steps:
reading an original head of a ring queue and an original global atomic variable in response to the existence of a queue element needing to be added to the ring queue, wherein the original global atomic variable comprises an original reference index, an original queue element index and an original current phase;
determining a queue remaining space according to the original head and the original global atomic variable, and determining the number of queue elements of an enqueuing operation according to the queue remaining space and the number of queue elements needing to be added to the ring queue;
constructing a new global atomic variable according to the original global atomic variable and the number of queue elements of the enqueuing operation, and performing an atomic operation according to the original global atomic variable and the new global atomic variable;
in response to the return value of the atomic operation being the original global atomic variable, filling the queue elements needing to be added to the ring queue to the corresponding positions in the ring queue; and
in response to there being a queue element that needs to leave the ring queue, deleting a corresponding number of queue elements starting from the head of the queue.
2. The method of claim 1, wherein determining the queue remaining space according to the original head and the original global atomic variable comprises:
subtracting one from the difference between the original queue element index in the original global atomic variable and the original head to obtain an intermediate result; and
taking the remainder of dividing the intermediate result by the maximum number of queue elements that the ring queue can accommodate as the queue remaining space.
3. The method for processing a queue by a RAID card cluster according to claim 1, wherein determining the number of queue elements of the enqueuing operation according to the queue remaining space and the number of queue elements needing to be added to the ring queue comprises:
taking the minimum of the queue remaining space and the number of queue elements needing to be added to the ring queue as the number of queue elements of the enqueuing operation.
4. The method for processing a queue by a RAID card cluster according to claim 3, wherein constructing the new global atomic variable according to the original global atomic variable and the number of queue elements of the enqueuing operation comprises:
adding one to the original reference index of the original global atomic variable to obtain a new reference index of the new global atomic variable; and
taking the sum of the original queue element index of the original global atomic variable and the number of queue elements of the enqueuing operation as a second intermediate result, and taking the remainder of dividing the second intermediate result by the maximum number of queue elements that the ring queue can accommodate as a new queue element index of the new global atomic variable.
5. The method for processing a queue of a RAID card cluster according to claim 1, further comprising:
re-checking the queue remaining space in response to the return value of the atomic operation not being the original global atomic variable.
6. The method for processing the queue by the RAID card cluster according to claim 1, wherein said filling the queue elements required to be added to the ring queue to the corresponding positions in the ring queue comprises:
filling the queue elements between the original queue element index and the new queue element index.
7. The method of claim 6, wherein the filling the queue elements between the original queue element index and the new queue element index comprises:
setting the current phase of each filled queue element to the original current phase of the original global atomic variable, and setting the serial numbers of the queue elements to decrease one by one.
8. The method for processing the queue by the RAID card cluster according to claim 7, wherein said setting the sequence number of each queue element to decrease one by one comprises:
taking the number of queue elements of the enqueuing operation minus one as the initial value of the serial number of the first queue element, and setting the serial number of each queue element other than the first queue element to be one less than the serial number of the preceding adjacent queue element.
9. The method for processing the queue by the RAID card cluster according to claim 8, wherein said filling the queue elements required to be added to the ring queue to the corresponding positions in the ring queue comprises:
assigning the current phase of the queue element corresponding to the original queue element index as the last step of the filling operation.
10. The method for processing a queue of a RAID card cluster according to claim 1, further comprising:
performing a second atomic operation according to the queue tail, the original queue element index and the new queue element index to determine whether the tail register is updated successfully.
11. The method for processing a queue of a RAID card cluster according to claim 10 wherein said performing a second atomic operation according to a queue tail, an original queue element index, and a new queue element index to determine if a tail register update is successful comprises:
in response to the return value of the second atomic operation being the original queue element index, the tail register is updated successfully; otherwise, the update of the tail register fails.
12. The method for processing a queue of a RAID card cluster according to claim 11, further comprising:
determining whether the tail register is updated successfully according to the first queue element after the new queue element index.
13. The method for processing a queue of a RAID card cluster according to claim 12 wherein said determining whether the tail register is updated successfully based on the first queue element indexed by the new queue element comprises:
in response to the queue element index of the first queue element after the new queue element index being non-zero and the current phase of the first queue element being equal to the original current phase, indicating that the tail register has not been updated.
14. The method for processing a queue of a RAID card cluster according to claim 13 wherein said determining whether the tail register is updated successfully based on the first queue element indexed by the new queue element comprises:
in response to the queue element index of the first queue element after the new queue element index being zero and the current phase of the first queue element not being equal to the original current phase, indicating that the tail register has not been updated.
15. The method for processing a queue of a RAID card cluster according to claim 14, further comprising:
in response to the tail register not having been updated, setting the original queue element index equal to the new queue element index.
16. The method for processing a queue of a RAID card cluster according to claim 15, further comprising:
taking the sum of the serial number of the first queue element after the new queue element index and the original queue element index as a third intermediate result, and taking the remainder of dividing the third intermediate result by the maximum number of queue elements that the ring queue can accommodate as the new queue element index.
17. The method for processing a queue of a RAID card cluster according to claim 1, further comprising:
incrementing the reference index by one on each write access, and alternating the current phase between one and zero in each round.
18. A system for processing a queue by a RAID card cluster, comprising:
a setting module configured to read an original head of a ring queue and an original global atomic variable in response to the existence of a queue element needing to be added to the ring queue, wherein the original global atomic variable comprises an original reference index, an original queue element index and an original current phase;
a determining module configured to determine a queue remaining space according to the original head and the original global atomic variable, and determine the number of queue elements of an enqueuing operation according to the queue remaining space and the number of queue elements needing to be added to the ring queue;
a construction module configured to construct a new global atomic variable according to the original global atomic variable and the number of queue elements of the enqueuing operation, and perform an atomic operation according to the original global atomic variable and the new global atomic variable;
a filling module configured to, in response to the return value of the atomic operation being the original global atomic variable, fill the queue elements needing to be added to the ring queue to the corresponding positions in the ring queue; and
a deletion module configured to delete a corresponding number of queue elements starting from the head of the queue in response to there being a queue element that needs to leave the ring queue.
19. A computer device, comprising:
at least one processor; and
a memory storing computer instructions executable on the processor, wherein the instructions, when executed by the processor, implement the steps of the method according to any one of claims 1 to 17.
20. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 17.
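As a closing illustration, the enqueue path recited in claims 1 to 4 and the filling rules of claims 7 to 9 are sketched together below in C. The 64-bit packing of the global atomic variable, the element layout, the fixed depth, and the use of C11 atomics are assumptions of this sketch; it is not a normative reading of the claims.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define DEPTH 1024u                       /* assumed ring capacity                */

    struct qe {                               /* illustrative queue element layout    */
        uint16_t seq;                         /* serial number, decreasing per batch  */
        uint16_t phase;                       /* current phase                        */
        uint8_t  payload[60];                 /* command body (placeholder)           */
    };

    static struct qe        ring[DEPTH];
    static _Atomic uint64_t g_state;          /* [ref idx | elem idx | phase] packed  */

    /* One enqueue attempt; returns the number of elements actually enqueued. */
    static uint32_t enqueue(const struct qe *src, uint32_t wanted, uint32_t head)
    {
        uint64_t old   = atomic_load(&g_state);
        uint32_t ref   = (uint32_t)(old >> 32);
        uint32_t idx   = (uint32_t)((old >> 1) & 0x7fffffffu);
        uint32_t phase = (uint32_t)(old & 1u);

        uint32_t space = (idx - head - 1u) % DEPTH;        /* remaining space (claim 2) */
        uint32_t n     = wanted < space ? wanted : space;  /* enqueue count (claim 3)   */
        if (n == 0)
            return 0;

        uint64_t new_state = ((uint64_t)(ref + 1u) << 32)           /* claim 4          */
                           | ((uint64_t)((idx + n) % DEPTH) << 1)
                           | phase;
        if (!atomic_compare_exchange_strong(&g_state, &old, new_state))
            return 0;                                      /* lost the race: re-check   */

        /* Claims 7-8: fill from the reserved index upward; serial numbers start at
         * n-1 for the first element and decrease by one for each following element. */
        for (uint32_t off = n; off-- > 0; ) {
            struct qe *dst = &ring[(idx + off) % DEPTH];
            memcpy(dst->payload, src[off].payload, sizeof dst->payload);
            dst->seq = (uint16_t)(n - 1u - off);
            if (off != 0)
                dst->phase = (uint16_t)phase;
        }

        /* Claim 9: assigning the current phase of the element at the original index
         * is done last, which is what publishes the whole batch to consumers.       */
        atomic_thread_fence(memory_order_release);
        ring[idx % DEPTH].phase = (uint16_t)phase;
        return n;
    }

The failure path of the compare-and-swap simply returns zero so that the caller can re-read the head and re-check the remaining space, mirroring claim 5.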
CN202310419988.4A 2023-04-19 2023-04-19 Method, system, equipment and medium for processing queue by RAID card cluster Active CN116149573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310419988.4A CN116149573B (en) 2023-04-19 2023-04-19 Method, system, equipment and medium for processing queue by RAID card cluster

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310419988.4A CN116149573B (en) 2023-04-19 2023-04-19 Method, system, equipment and medium for processing queue by RAID card cluster

Publications (2)

Publication Number Publication Date
CN116149573A CN116149573A (en) 2023-05-23
CN116149573B true CN116149573B (en) 2023-07-14

Family

ID=86339192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310419988.4A Active CN116149573B (en) 2023-04-19 2023-04-19 Method, system, equipment and medium for processing queue by RAID card cluster

Country Status (1)

Country Link
CN (1) CN116149573B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889046A (en) * 2006-08-03 2007-01-03 迈普(四川)通信技术有限公司 Multi-kernel parallel first-in first-out queue processing system and method
CN104168217A (en) * 2014-08-15 2014-11-26 杭州华三通信技术有限公司 Scheduling method and device for first in first out queue
CN105045632A (en) * 2015-08-10 2015-11-11 京信通信技术(广州)有限公司 Method and device for implementing lock free queue in multi-core environment
CN111124641A (en) * 2019-12-12 2020-05-08 中盈优创资讯科技有限公司 Data processing method and system using multiple threads
CN112506683A (en) * 2021-01-29 2021-03-16 腾讯科技(深圳)有限公司 Data processing method, related device, equipment and storage medium
CN114385352A (en) * 2021-12-17 2022-04-22 南京中科晶上通信技术有限公司 Satellite communication system, data caching method thereof and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090249356A1 (en) * 2008-03-31 2009-10-01 Xin He Lock-free circular queue in a multiprocessing system

Also Published As

Publication number Publication date
CN116149573A (en) 2023-05-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant