CN113377549B - Queue data control method, system and queue data structure - Google Patents

Queue data control method, system and queue data structure

Info

Publication number
CN113377549B
CN113377549B (application CN202110921877.4A)
Authority
CN
China
Prior art keywords
address
queue
value
failure counter
pointer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110921877.4A
Other languages
Chinese (zh)
Other versions
CN113377549A (en)
Inventor
张宙
阮涛
左海波
梁猛
郦建新
张扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Qi'an Information Technology Co ltd
Original Assignee
Zhejiang Qi'an Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Qi'an Information Technology Co ltd filed Critical Zhejiang Qi'an Information Technology Co ltd
Priority to CN202110921877.4A priority Critical patent/CN113377549B/en
Publication of CN113377549A publication Critical patent/CN113377549A/en
Application granted granted Critical
Publication of CN113377549B publication Critical patent/CN113377549B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/526 Mutual exclusion algorithms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3824 Operand accessing
    • G06F9/3834 Maintaining memory consistency

Abstract

The application discloses a queue data control method comprising the following steps: setting a circular queue that contains at least one address, each address being used to store an element; setting an insertion failure counter that records the address at which inserting an element into the circular queue failed; setting a take-out failure counter that records the address at which taking an element out of the circular queue failed; and setting an operation lock for the insertion failure counter and an operation lock for the take-out failure counter. During an enqueue operation, if the take-out failure counter holds a record of a take-out failure event, the operation lock of the take-out failure counter is locked, the record of the take-out failure event is deleted, the operation lock is released, and the queue tail pointer is re-acquired. During a dequeue operation, if the insertion failure counter holds a record of an insertion failure event, the operation lock of the insertion failure counter is locked, the record of the insertion failure event is deleted, the operation lock is released, and the queue head pointer is re-acquired.

Description

Queue data control method, system and queue data structure
Technical Field
The present application belongs to the field of computer technologies, and in particular, to a method and a system for controlling queue data and a queue data structure.
Background
Concurrent program refers to a program consisting of several program modules that can be executed simultaneously, which are called threads. The multiple threads constituting one program may be executed concurrently on multiple processors at the same time or may be executed alternately on one processor. The multiple threads can communicate with each other by reading and writing the shared data area or sending messages, so that the threads can cooperate with each other to complete the task. The multithreading execution mode can greatly shorten the program execution time and improve the running efficiency of the computer.
However, the most prominent problem in multi-threaded programs is data synchronization. Multiple threads must synchronize their accesses to shared memory variables to ensure logical correctness. When multiple threads compete for the same resource and the result is sensitive to the order of access, the situation is called a race condition, and the region of code that gives rise to it is called a critical section. Common ways of handling critical sections include mutual exclusion locks, semaphores, and the like. A mutex lock essentially consists of two primitive operations: locking and unlocking. At any given time only one thread succeeds in locking and executes the following code until it unlocks; the remaining threads keep attempting to acquire the lock, and during this time they are suspended by the operating system and wait to be woken up.
However, the original purpose of multithreading is to execute tasks faster in parallel. Although the lock mechanism solves the data synchronization problem in multi-threaded scenarios, it turns parallel execution into serial execution and reduces task execution efficiency. Recognizing the problems with lock-based synchronization, the industry has in recent years begun to explore lock-free data structures.
An atomic operation is an operation that cannot be interrupted by any other instruction or interrupt until it completes. To guarantee atomicity in a multiprocessor environment, CPUs provide a series of instruction primitives, including atomic read (Load), atomic store (Store), and atomic compare-and-swap (CAS). These basic atomic operations are the foundation on which lock-free data structures are implemented.
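For illustration only, the short Java sketch below exercises these primitives through java.util.concurrent.atomic.AtomicLong: the constructor stores the initial value, get() is the atomic read, and compareAndSet() is the CAS. The class and variable names are chosen for the example and do not appear in the patent.

    import java.util.concurrent.atomic.AtomicLong;

    public class CasDemo {
        public static void main(String[] args) {
            AtomicLong counter = new AtomicLong(0);   // atomic store of the initial value
            long seen = counter.get();                // atomic read (Load)
            // atomic compare-and-swap (CAS): succeeds only if the value is still `seen`
            boolean swapped = counter.compareAndSet(seen, seen + 1);
            System.out.println("CAS succeeded: " + swapped + ", value = " + counter.get());
        }
    }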
Today, lock-free queue implementations can be roughly divided into linked-list implementations and array implementations. In a linked-list implementation, each data object to be enqueued is wrapped in a linked-list node, and atomic compare-and-swap (CAS) operations on the head and tail of the list are used to update the message queue.
Array implementations of lock-free queues typically use a circular array. The storage space of the queue is treated as a circular array, positions are claimed through CAS operations, and a set of barrier rules prevents writes from piling up in the same storage position while the space is reused cyclically.
However, these lock-free queue implementations still have problems that affect performance. For example, creating and recycling linked-list nodes adds a large number of memory operations, which is unfriendly to garbage collection in some languages; frequent exchange of head and tail count information between producers and consumers hurts performance; and some adjustments made to the tail handling to boost performance no longer support non-blocking operation.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide a queue data control method and system which, on the basis of a circular-array queue and on the premise of supporting both blocking and non-blocking operations, use pre-locking of resources and rollback on error to decouple the frequent transfer of head and tail count information between producers and consumers, thereby improving the performance of the queue system.
The application provides a queue data control method, which comprises the following steps:
establishing a plurality of circular queues, wherein each circular queue comprises a head pointer and a tail pointer which point to the operation address of the circular queue;
setting an insertion failure counter which is an array with the length being the same as that of the circular queue, wherein the insertion failure counter is used for recording an insertion element failure event in any address of the circular queue;
setting a take-out failure counter which is an array with the length being the same as that of the circular queue, wherein the take-out failure counter is used for recording a take-out element failure event of any address of the circular queue;
setting an operation lock of an insertion failure counter and an operation lock of a take-out failure counter based on a mutual exclusion lock principle;
when an enqueue operation is performed on the circular queue, a producer thread initiates a first data storage request and obtains the queue tail pointer; whether the address pointed to by the queue tail pointer has a record of a take-out failure event in the take-out failure counter is checked; if the record exists, the operation lock of the take-out failure counter is locked, the record of the take-out failure event is deleted, the operation lock of the take-out failure counter is released, and the queue tail pointer is obtained again;
when a dequeue operation is performed on the circular queue, a consumer thread initiates a first data fetch request and obtains the queue head pointer; whether the address pointed to by the queue head pointer has a record of an insertion failure event in the insertion failure counter is checked; if the record exists, the operation lock of the insertion failure counter is locked, the record of the insertion failure event is deleted, the operation lock of the insertion failure counter is released, and the queue head pointer is obtained again.
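For concreteness, the following is a minimal Java sketch of one possible field layout for the structures enumerated above: one circular queue, its head and tail pointers, the insertion failure counter, the take-out failure counter, and the two operation locks. The class name RingQueue, the field names, and the use of AtomicReferenceArray, AtomicLongArray and ReentrantLock are assumptions made for the example, not part of the claimed method.

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.AtomicLongArray;
    import java.util.concurrent.atomic.AtomicReferenceArray;
    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical field layout; null in a buffer slot means the slot is empty.
    final class RingQueue<E> {
        final int capacity;                        // array length of the circular queue
        final AtomicReferenceArray<E> buffer;      // the circular queue itself
        final AtomicLong head = new AtomicLong();  // queue head pointer (only ever increases)
        final AtomicLong tail = new AtomicLong();  // queue tail pointer (only ever increases)
        final AtomicLongArray insertFail;          // insertion failure counter, same length as buffer
        final AtomicLongArray fetchFail;           // take-out failure counter, same length as buffer
        final ReentrantLock insertFailLock = new ReentrantLock(); // operation lock of the insertion failure counter
        final ReentrantLock fetchFailLock = new ReentrantLock();  // operation lock of the take-out failure counter

        RingQueue(int capacity) {
            this.capacity = capacity;
            this.buffer = new AtomicReferenceArray<>(capacity);
            this.insertFail = new AtomicLongArray(capacity);  // every element starts at the initial value 0
            this.fetchFail = new AtomicLongArray(capacity);
        }

        int slot(long pointer) {                   // address pointed to = pointer % queue array length
            return (int) (pointer % capacity);
        }
    }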
Further, an insertion failure event in any address of the circular queue is recorded through the insertion failure counter, and the addresses of the circular queue have a mapping relationship with the addresses of the insertion failure counter. The circular queue comprises a first address, which has a mapping relationship with a first mapping address in the insertion failure counter; the element in the first mapping address is given an initial value, and when inserting an element at the first address fails, the value of the element in the first mapping address is rewritten.
Similarly, a take-out failure event at any address of the circular queue is recorded through the take-out failure counter, and the addresses of the circular queue have a mapping relationship with the addresses of the take-out failure counter. The first address has a mapping relationship with a second mapping address in the take-out failure counter; the element in the second mapping address is given an initial value, and when taking an element out of the first address fails, the value of the element in the second mapping address is rewritten.
Further, when enqueuing the circular queue, the method comprises the following steps:
acquiring a queue tail pointer, wherein the queue tail pointer points to a first address in the circular queue;
checking the value of an element in the second mapping address, and checking whether the element in the first address of the circular queue is empty when the value of the element in the second mapping address is equal to the initial value;
in response to an element in the first address of the circular queue being empty, increasing, by a CAS operation, a value of the queue tail pointer and inserting, by a CAS operation, the element in the first data storage request to the first address of the circular queue;
when increasing the value of the queue tail pointer by CAS operation fails, the queue tail pointer is acquired again;
in response to the fact that an element in a first address of the circular queue is not empty, checking whether enqueuing operation is blocked or not, and when the enqueuing operation is blocked, re-acquiring the queue tail pointer;
when the enqueue operation is non-blocking, rechecking whether an element in the first address of the circular queue is empty, in response to the element in the first address of the circular queue being empty, increasing the value of the queue tail pointer by a CAS operation, and inserting the element in the first data storage request into the first address of the circular queue by the CAS operation.
Further, the method further comprises the steps of: checking the value of an element in the second mapping address, locking the operation lock of the fetch failure counter and confirming the value of the element in the second mapping address again when the value of the element in the second mapping address is not equal to the initial value, and increasing the value of the queue tail pointer through CAS operation when the value of the element in the second mapping address is not equal to the initial value;
when the CAS operation succeeds in increasing the value of the queue tail pointer, rewriting the value of the element in the second mapping address into an initial value, releasing the operation lock of the fetch failure counter, and reacquiring the queue tail pointer;
and when the CAS operation fails to increase the value of the queue tail pointer, releasing the operation lock of the fetch failure counter and reacquiring the queue tail pointer.
Further, the method further comprises the steps of: inserting elements in a first data storage request into a first address of a circular queue through CAS operation, locking an operation lock of an insertion failure counter when the insertion fails, rewriting initial values of the elements in the first mapping address, releasing the operation lock of the insertion failure counter, and acquiring a queue tail pointer again;
wherein overwriting the initial values of the elements in the first mapped address is achieved by adding one to the values of the elements in the first mapped address.
Further, when the consumer thread initiates a first data fetch request, the method comprises the steps of:
acquiring a head-of-line pointer, wherein the head-of-line pointer points to a first address in the circular queue;
checking the value of an element in the first mapping address, and checking whether the element in the first address of the circular queue is empty when the value of the element in the first mapping address is equal to an initial value;
in response to an element in a first address of the circular queue being non-empty, increasing a value of the head of queue pointer by a CAS operation and fetching the element in the first address by the CAS operation;
when increasing the value of the head of line pointer by CAS operation fails, the head of line pointer is acquired again;
responding to the condition that an element in a first address of the circular queue is empty, checking whether dequeue operation is blocked, and when the dequeue operation is blocked, re-acquiring the head-of-queue pointer;
when the dequeue operation is not blocked, rechecking whether an element in the first address of the circular queue is empty, and in response to the element in the first address of the circular queue being not empty, increasing the value of the head of queue pointer by a CAS operation and fetching the element in the first address by the CAS operation.
Further, the method further comprises the steps of: checking the value of the element in the first mapping address; when the value of the element in the first mapping address is not equal to the initial value, locking the insertion failure counter operation lock and confirming the value of the element in the first mapping address again; and when the value of the element in the first mapping address is still not equal to the initial value, increasing the value of the queue head pointer through a CAS operation;
when the CAS operation succeeds in increasing the value of the head of line pointer, rewriting the value of the element in the first mapping address into an initial value, releasing the operation lock of the insertion failure counter, and reacquiring the head of line pointer;
when the CAS operation fails to increment the value of the queue tail pointer, the insertion failure counter operation lock is released and the queue head pointer is reacquired.
Further, the method further comprises the steps of: taking out the elements in the first address through CAS operation, locking a take-out failure counter operation lock when the take-out fails, rewriting initial values of the elements in the second mapping address, releasing the take-out failure counter operation lock, and acquiring the head of queue pointer again;
wherein overwriting the initial values of the elements in the second mapped address is achieved by adding one to the values of the elements in the second mapped address.
The application also provides a queue data control system, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the queue data control method provided by the application.
The application also provides a queue data structure, which comprises a circular queue, wherein the circular queue comprises a head pointer and a tail pointer pointing to the operation address of the circular queue; the first array is the same as the length of the circular queue and is used for recording an element inserting failure event in any address of the circular queue; and the second array has the same length as the circular queue and is used for recording failure events of taking out elements from any address of the circular queue.
Compared with the prior art, the technical solution provided by the application is provided with an insertion failure counter and a take-out failure counter. The application first uses low-cost value-read operations to judge whether a wrap-around conflict has occurred, and only when a wrap-around conflict does occur is a lock used for branch processing, so that thread safety is ensured while the processing efficiency of the threads is improved.
Drawings
FIG. 1 is a schematic diagram illustrating a flow of an insertion element in a queue data control method disclosed in the present application;
FIG. 2 is a flow chart illustrating a first lock branch processing procedure when an insertion element conflicts;
FIG. 3 is a flow diagram illustrating a second lock branch processing procedure when an insert element conflicts;
FIG. 4 is a schematic diagram illustrating a flow of a dequeue element in a queue data control method disclosed herein;
FIG. 5 is a flow diagram illustrating a third lock branch processing procedure when a conflict occurs in a fetch element;
FIG. 6 is a flow diagram illustrating a fourth lock branch processing when a conflict occurs in a fetch element;
fig. 7 is a schematic diagram of a queue data control system according to an embodiment of the present application.
Detailed Description
The present application is described in detail below with reference to specific embodiments shown in the drawings, but the embodiments do not limit the present application, and structural, methodological, or functional changes made by those skilled in the art according to the embodiments are included in the scope of the present application.
A queue is a linear table with an insert operation at one end and a pop operation at the other. The reading and writing of the queue follow the principle of first-in first-out, and the element which is firstly put into the queue is firstly taken out. The end that allows insertion is often referred to as the tail of the line and the end that allows removal is referred to as the head of the line. The program that inserts data is referred to as a producer thread, and the program that fetches data is referred to as a consumer thread.
The queue adopts a sequential storage structure, an array is used as a sequential storage space of the queue, subscripts of a head element and a tail element are respectively stored by two integer variables, here, the integer variable for storing the subscripts of the head element is called a head pointer, and the integer variable for storing the subscripts of the tail element is called a tail pointer.
Because queue operations are performed at both ends, as producer threads and consumer threads continuously enqueue and dequeue elements, both the head and the tail of the queue move backwards and quickly reach the end of the array. The free cells at the front of the array can then no longer be reused and new elements have no space to enqueue; this phenomenon is called false overflow. A common way to solve false overflow is to treat the storage space of the sequential queue as a circular space: when false overflow occurs, newly added elements are inserted at the first position, so the space can be recycled. This is the circular queue.
The application provides a queue data control method, which adopts a mode of pre-locking resources and returning errors on the basis of a circular queue. On the premise of simultaneously supporting blocking and non-blocking operations, frequent transmission of head-of-line and tail-of-line counting information between producer threads and consumer threads is decoupled, and the performance of a queue system is improved.
The queue data control method provided by the application has a data structure comprising at least one circular queue, which is used for recording data; the content of the circular queue is updated using CAS operations. A circular queue is first initialized: the maximum capacity allowed by the circular queue is set, and the queue head pointer and queue tail pointer are set. The maximum capacity allowed by the circular queue is the array length of the queue. In the circular queue, the head pointer and the tail pointer usually move in a clockwise direction; each time one of them moves by one position, its value is increased by one, and the address it points to is interpreted using a modulo operation. For example, the address pointed to by the queue tail pointer is (queue tail pointer % queue array length), and the address pointed to by the queue head pointer is (queue head pointer % queue array length).
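As a small illustration of this modulo addressing, the following runnable Java snippet uses illustrative variable names and values that are not taken from the patent:

    public class ModuloAddressing {
        public static void main(String[] args) {
            int capacity = 5;        // array length of the circular queue
            long tailPointer = 7;    // pointer values only ever increase
            long headPointer = 3;
            System.out.println("tail slot = " + (tailPointer % capacity)); // 7 % 5 = 2
            System.out.println("head slot = " + (headPointer % capacity)); // 3 % 5 = 3
        }
    }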
In one aspect, the setting of the head of line pointer and the tail of line pointer may be: the head of line pointer points to the position of the head of line element, and the tail of line pointer points to the position of the tail of line element. In the second aspect, the setting of the head of line pointer and the tail of line pointer may be: the head of line pointer points to the position of the head of line element, and the tail of line pointer points to the next position of the tail of line element. In a third aspect, the setting of the head of line pointer and the tail of line pointer may be: the head of line pointer points to the previous position of the head of line element, and the tail of line pointer points to the position of the tail of line element. In the fourth aspect, the setting of the head of line pointer and the tail of line pointer may be: the head of line pointer points to the previous position of the head of line element, and the tail of line pointer points to the next position of the tail of line element.
The data structure of the queue data control method provided by the invention further comprises the following steps: at least one insertion failure counter and at least one fetch failure counter. The insertion failure counter is an array with the length being the same as that of the array of the circular queue, and the insertion failure counter is used for recording an insertion element failure event in any address of the circular queue.
As an alternative implementation, the address of the insertion failure counter has a mapping relation with the address of the circular queue. For convenience of illustration, the circular queue includes a first address, and a certain address in the insertion failure counter has a mapping relationship with the first address, and is referred to as a first mapping address. When the first address in the circular queue fails to insert the element, the element in the first mapping address in the insertion failure counter is rewritten, so as to record the event that the first address fails to insert the element.
The fetch failure counter is an array with the length same as that of the array of the circular queue, and the fetch failure counter is used for recording the fetch element failure event of any address of the circular queue.
As an alternative implementation, the address of the fetch failure counter has a mapping relation with the address of the circular queue. For convenience of illustration, the circular queue includes a first address, and a certain address in the fetch failure counter has a mapping relationship with the first address, and is referred to as a second mapping address. When the first address in the circular queue fails to take out the element, the element in the second mapping address in the take-out failure counter is rewritten, so as to record the event that the first address fails to take out the data.
It is to be noted that the first address mentioned in the present application does not refer to a fixed address in the circular queue, but refers to any address in the circular queue. Similarly, the same is true for the first mapped address and the second mapped address.
As an optional implementation, the element in the first mapping address is set with an initial value, and the initial value of the element in the first mapping address in the insertion failure counter may be set to 0. When insertion of an element at the first address in the circular queue fails, the element in the first mapping address in the insertion failure counter is overwritten; the overwriting may be performed by increasing the element by one, so as to record the event that insertion at the first address failed. In the embodiments provided in the present application, when the value of the element in the first mapping address is 0, it indicates that insertion at the first address succeeded; when the value of the element in the first mapping address is greater than 0, for example 1, it indicates that insertion at the first address failed.
Similarly, as an optional implementation, the element in the second mapping address is set with an initial value, and the initial value of the element in the second mapping address in the take-out failure counter may be set to 0. When taking an element out of the first address in the circular queue fails, the element in the second mapping address in the take-out failure counter is overwritten; the overwriting may be performed by increasing the element by one, so as to record the event that taking an element out of the first address failed. In the embodiments provided in the present application, when the value of the element in the second mapping address is 0, it indicates that taking an element out of the first address succeeded; when the value of the element in the second mapping address is greater than 0, for example 1, it indicates that taking an element out of the first address failed.
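A brief Java sketch of these counter semantics, assuming the counter is held in an AtomicLongArray as in the earlier RingQueue sketch; the array length and the mapped index are illustrative:

    import java.util.concurrent.atomic.AtomicLongArray;

    public class FailureCounterDemo {
        public static void main(String[] args) {
            AtomicLongArray insertFail = new AtomicLongArray(5); // every element starts at the initial value 0
            int firstMappingAddress = 2;                          // slot mapped to the "first address"

            insertFail.incrementAndGet(firstMappingAddress);      // record an insertion failure: 0 -> 1
            boolean failed = insertFail.get(firstMappingAddress) > 0; // a value greater than 0 means a failure is recorded
            insertFail.decrementAndGet(firstMappingAddress);      // delete the record: back to the initial value
            System.out.println("failure recorded: " + failed
                    + ", value now " + insertFail.get(firstMappingAddress));
        }
    }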
When an enqueue operation is performed on the circular queue, a producer thread initiates a first data storage request and obtains the queue tail pointer; it checks whether the address pointed to by the queue tail pointer has a record of a take-out failure event in the take-out failure counter; if the record exists, the take-out failure counter operation lock is locked, the record of the take-out failure event is deleted, the operation lock is released, and the queue tail pointer is obtained again. As an optional implementation, as shown in FIG. 1, when a producer thread initiates a first data storage request, the method includes, but is not limited to, the following steps:
S101, acquiring a queue tail pointer, wherein the queue tail pointer points to a first address in the circular queue.
S102, checking the value of the element in the second mapping address, and checking whether the element in the first address of the circular queue is empty or not when the value of the element in the second mapping address is equal to the initial value.
S103, responding to the condition that the element in the first address of the circular queue is empty, increasing the value of the queue tail pointer through CAS operation, and inserting the element in the first data storage request into the first address of the circular queue through CAS operation.
Wherein the queue tail pointer is reacquired when increasing the value of the queue tail pointer by the CAS operation fails.
S104, responding to the fact that the element in the first address of the circular queue is not empty, checking whether enqueuing operation is blocked or not, and when the enqueuing operation is blocked, re-acquiring the queue tail pointer.
When the enqueue operation is non-blocking, rechecking whether an element in the first address of the circular queue is empty, in response to the element in the first address of the circular queue being empty, increasing the value of the queue tail pointer by a CAS operation, and inserting the element in the first data storage request into the first address of the circular queue by the CAS operation.
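The following Java method is a minimal sketch of this enqueue fast path (steps S101 to S104), written as if it were a member of the hypothetical RingQueue<E> class sketched earlier; it is an interpretation of the described flow, not the patented implementation. Returning false means the producer either falls into one of the lock branches described below or goes back to S101 and re-acquires the queue tail pointer (or, for a non-blocking enqueue onto a full slot, reports failure to the caller).

    // Enqueue fast path, S101 to S104 (member of the hypothetical RingQueue<E> class).
    boolean tryEnqueueOnce(E element, boolean blocking) {
        long tail = this.tail.get();                 // S101: obtain the queue tail pointer
        int slot = slot(tail);                       // the first address it points to
        if (fetchFail.get(slot) != 0) {              // a take-out failure is recorded: first lock branch
            return false;
        }
        if (buffer.get(slot) == null) {              // S102/S103: the first address is empty
            if (!this.tail.compareAndSet(tail, tail + 1)) {
                return false;                        // CAS on the tail pointer failed: re-acquire it
            }
            // insert by CAS; a failure here is handled by the second lock branch (sketched below)
            return buffer.compareAndSet(slot, null, element);
        }
        // S104: the first address is not empty
        if (blocking) {
            return false;                            // blocking enqueue: re-acquire the tail pointer and retry
        }
        // non-blocking enqueue: re-check the slot once before reporting failure
        if (buffer.get(slot) == null && this.tail.compareAndSet(tail, tail + 1)) {
            return buffer.compareAndSet(slot, null, element);
        }
        return false;
    }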
As shown in fig. 2, as an optional implementation manner, the method for controlling queue data provided by the present invention further includes a first lock branch process, including the following steps:
S201, checking the value of the element in the second mapping address, locking the operation lock of the fetch failure counter and confirming the value of the element in the second mapping address again when the value of the element in the second mapping address is not equal to the initial value, and increasing the value of the queue tail pointer through CAS operation when the value of the element in the second mapping address is not equal to the initial value.
As an optional implementation, the initial value of the element in the second mapping address is equal to 0. Therefore, when the value of the element in the second mapping address is greater than 0, the take-out failure counter operation lock is locked and the value of the element in the second mapping address is confirmed again; when it is confirmed that the value is still not equal to the initial value, the value of the queue tail pointer is increased by a CAS operation.
S202, when the CAS operation successfully increases the value of the queue tail pointer, rewriting the value of the element in the second mapping address into an initial value, releasing the operation lock of the fetch failure counter, and reacquiring the queue tail pointer.
S203, when the CAS operation fails to increase the value of the queue tail pointer, releasing the operation lock of the fetch failure counter and reacquiring the queue tail pointer.
As an optional implementation, since the present application records the event that taking an element out of the first address failed by increasing the element in the second mapping address of the take-out failure counter by one, the value of the element in the second mapping address can be decreased by one to reset it to the initial value; at this point, the record of the take-out failure event in the take-out failure counter is deleted.
The basic principle of the take-out failure counter operation lock is the same as that of a mutex lock. A mutex lock essentially consists of two primitive operations: locking and unlocking. At any given time only one thread succeeds in locking and executes the following code until it unlocks; the remaining threads keep attempting to acquire the lock, and during this time they are suspended by the operating system and wait to be woken up.
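A corresponding Java sketch of this first lock branch (steps S201 to S203), again assuming the hypothetical RingQueue fields; after the method returns, the caller re-acquires the queue tail pointer:

    // First lock branch, S201 to S203 (member of the hypothetical RingQueue<E> class).
    void firstLockBranch(long tail) {
        int slot = slot(tail);
        fetchFailLock.lock();                        // lock the take-out failure counter operation lock
        try {
            if (fetchFail.get(slot) == 0) {          // S201: confirm the value again under the lock
                return;                              // no record after all: nothing to clean up
            }
            if (this.tail.compareAndSet(tail, tail + 1)) {
                fetchFail.decrementAndGet(slot);     // S202: delete the take-out failure record
            }
            // S203: if the CAS failed, nothing else to do here
        } finally {
            fetchFailLock.unlock();                  // release the operation lock; caller re-acquires the tail pointer
        }
    }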
As shown in fig. 3, as an optional implementation manner, the queue data control method provided in the present invention further includes a second lock branch process, including the following steps:
S301, inserting the element in the first data storage request into a first address of the circular queue through CAS operation, and locking the insertion failure counter operation lock when the insertion fails.
S302, rewriting initial values of elements in the first mapping address.
S303, releasing the operation lock of the insertion failure counter and reacquiring the pointer at the tail of the queue.
As an alternative implementation, overwriting the initial value of an element in the first mapped address is implemented by adding one to the value of the element in the first mapped address.
The basic principle of the insertion failure counter operation lock is likewise the same as that of a mutex lock: only one thread succeeds in locking at any time and executes the following code until it unlocks, while the remaining threads keep attempting to acquire the lock and are suspended by the operating system until woken up.
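A Java sketch of this second lock branch (steps S301 to S303), under the same RingQueue assumptions; the slot index is the first address that the failed insertion targeted:

    // Second lock branch, S301 to S303 (member of the hypothetical RingQueue<E> class).
    void secondLockBranch(int slot) {
        insertFailLock.lock();                       // S301: lock the insertion failure counter operation lock
        try {
            insertFail.incrementAndGet(slot);        // S302: record the insertion failure (0 -> 1)
        } finally {
            insertFailLock.unlock();                 // S303: release the lock; caller re-acquires the tail pointer
        }
    }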
In the present application, obtaining the queue tail pointer is the first step of the enqueue operation. Therefore, in the queue data control method provided by the application, every re-acquisition of the queue tail pointer means that the first data storage request initiated by the producer thread returns to the first step of the enqueue operation and execution continues according to the method provided by the application. Note that when the queue tail pointer is re-acquired, the address it points to in the circular queue may differ from the address pointed to by the previously obtained queue tail pointer, because the position of the queue tail changes constantly as multiple threads execute concurrently.
When dequeuing the circular queue, a consumer thread initiates a first data fetching request, acquires a queue head pointer, checks whether an address pointed by the queue head pointer has a record of an insertion element failure event in an insertion failure counter, locks an insertion failure counter operation lock if the record exists, releases the insertion failure counter operation lock after deleting the record of the insertion element failure event, and reacquires the queue head pointer.
As an alternative implementation, as shown in FIG. 4, when a consumer thread initiates a first data fetch request, the method includes, but is not limited to, the following steps:
S401, a head of line pointer is obtained, and the head of line pointer points to a first address in the circular queue.
S402, checking the value of the element in the first mapping address, and checking whether the element in the first address of the circular queue is empty or not when the value of the element in the first mapping address is equal to the initial value.
S403, in response to the element in the first address of the circular queue being not empty, increasing the value of the head of queue pointer through CAS operation, and taking out the element in the first address through CAS operation.
Wherein the head of line pointer is reacquired when increasing the value of the head of line pointer by CAS operation fails.
S404, responding to the condition that the element in the first address of the circular queue is empty, checking whether the dequeue operation is blocked, and when the dequeue operation is blocked, acquiring the queue head pointer again.
When the dequeue operation is not blocked, rechecking whether an element in the first address of the circular queue is empty, and in response to the element in the first address of the circular queue being not empty, increasing the value of the head of queue pointer by a CAS operation and fetching the element in the first address by the CAS operation.
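The following Java method is a minimal sketch of this dequeue fast path (steps S401 to S404), again written against the hypothetical RingQueue<E> fields and as an interpretation of the described flow. Returning null means the consumer either falls into one of the lock branches described below or goes back to S401 and re-acquires the queue head pointer (or, for a non-blocking dequeue of an empty slot, reports that the queue is empty).

    // Dequeue fast path, S401 to S404 (member of the hypothetical RingQueue<E> class).
    E tryDequeueOnce(boolean blocking) {
        long head = this.head.get();                 // S401: obtain the queue head pointer
        int slot = slot(head);                       // the first address it points to
        if (insertFail.get(slot) != 0) {             // an insertion failure is recorded: third lock branch
            return null;
        }
        E element = buffer.get(slot);
        if (element != null) {                       // S402/S403: the first address is not empty
            if (!this.head.compareAndSet(head, head + 1)) {
                return null;                         // CAS on the head pointer failed: re-acquire it
            }
            // take out by CAS; a failure here is handled by the fourth lock branch (sketched below)
            return buffer.compareAndSet(slot, element, null) ? element : null;
        }
        // S404: the first address is empty
        if (blocking) {
            return null;                             // blocking dequeue: re-acquire the head pointer and retry
        }
        // non-blocking dequeue: re-check the slot once before reporting an empty queue
        element = buffer.get(slot);
        if (element != null && this.head.compareAndSet(head, head + 1)
                && buffer.compareAndSet(slot, element, null)) {
            return element;
        }
        return null;
    }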
As shown in fig. 5, as an optional implementation manner, the queue data control method provided in the present invention further includes a third lock branch process, including the following steps:
S501, checking the value of an element in the first mapping address, locking the insertion failure counter operation lock and confirming the value of the element in the first mapping address again when the value of the element in the first mapping address is not equal to the initial value, and increasing the value of the head of queue pointer through CAS operation when the value of the element in the first mapping address is not equal to the initial value.
S502, when the CAS operation successfully increases the value of the head of line pointer, the value of the element in the first mapping address is rewritten to the initial value, the operation lock of the insertion failure counter is released, and the head of line pointer is obtained again.
S503, when the CAS operation fails to increase the value of the head of the queue pointer, releasing the operation lock of the insertion failure counter and reacquiring the head of the queue pointer.
As an optional implementation, the initial value of the element in the first mapping address is equal to 0. Therefore, when the value of the element in the first mapping address is greater than 0, the insertion failure counter operation lock is locked and the value of the element in the first mapping address is confirmed again; when it is confirmed that the value is still not equal to the initial value, the value of the queue head pointer is increased by a CAS operation. As an optional implementation, since the event that inserting an element at the first address failed is recorded by increasing the element in the first mapping address of the insertion failure counter by one, the value of the element in the first mapping address can be decreased by one to reset it to the initial value; at this point, the record of the insertion failure event in the insertion failure counter is deleted.
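A Java sketch of this third lock branch (steps S501 to S503), under the same RingQueue assumptions; after the method returns, the caller re-acquires the queue head pointer:

    // Third lock branch, S501 to S503 (member of the hypothetical RingQueue<E> class).
    void thirdLockBranch(long head) {
        int slot = slot(head);
        insertFailLock.lock();                       // lock the insertion failure counter operation lock
        try {
            if (insertFail.get(slot) == 0) {         // S501: confirm the value again under the lock
                return;                              // no record after all: nothing to clean up
            }
            if (this.head.compareAndSet(head, head + 1)) {
                insertFail.decrementAndGet(slot);    // S502: delete the insertion failure record
            }
            // S503: if the CAS failed, nothing else to do here
        } finally {
            insertFailLock.unlock();                 // release the operation lock; caller re-acquires the head pointer
        }
    }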
As shown in fig. 6, as an optional implementation manner, the queue data control method provided in the present invention further includes a fourth lock branch process, including the following steps:
S601, taking out the elements in the first address through CAS operation, and locking a take-out failure counter operation lock when the take-out fails.
S602, rewriting the initial value of the element in the second mapping address.
S603, releasing the operation lock of the fetch failure counter and acquiring the head of line pointer again.
As an alternative implementation manner, the application rewrites the initial value of the element in the second mapping address by adding one to the value of the element in the second mapping address.
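A Java sketch of this fourth lock branch (steps S601 to S603), under the same RingQueue assumptions; the slot index is the first address that the failed take-out targeted:

    // Fourth lock branch, S601 to S603 (member of the hypothetical RingQueue<E> class).
    void fourthLockBranch(int slot) {
        fetchFailLock.lock();                        // S601: lock the take-out failure counter operation lock
        try {
            fetchFail.incrementAndGet(slot);         // S602: record the take-out failure (0 -> 1)
        } finally {
            fetchFailLock.unlock();                  // S603: release the lock; caller re-acquires the head pointer
        }
    }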
In the present application, obtaining the queue head pointer is the first step of the dequeue operation. Therefore, every re-acquisition of the queue head pointer means that the first data fetch request initiated by the consumer thread returns to the first step of the dequeue operation and execution continues according to the method provided by the application. Note that when the queue head pointer is re-acquired, the address it points to in the circular queue may differ from the address pointed to by the previously obtained queue head pointer, because the position of the queue head changes constantly as multiple threads execute concurrently.
There is a very small probability that a wrap-around conflict occurs during multi-threaded processing. Taking consumer threads as an example, a wrap-around conflict means that, while two consumer threads operate on a circular queue of length L, one consumer thread obtains the operation right for queue head pointer N but has not yet completed the update of position N % L, and the other consumer thread detects that position N % L is operable and obtains the operation right for queue head pointer M, where M > N and M % L = N % L. At this point, no matter which consumer thread performs the CAS update on position N % L, the other thread has already acquired the operation right for position M % L but can no longer operate that position, and a conflict occurs.
A circular queue may spend time on data synchronization or lock operations every time it performs a dequeue or enqueue operation. The queue data control method provided by the application introduces an insertion failure counter and a take-out failure counter on top of the circular queue, and first uses low-cost value-read operations to detect the extremely low-probability wrap-around conflict. Only when a wrap-around conflict actually occurs is a lock used for branch processing, which ensures thread safety while improving the processing efficiency of the threads.
Based on the aforementioned inventive concept, as shown in fig. 7, the present application further provides a queue data control system 10. Comprising a memory 11, a processor 12 and a computer program 13 stored on the memory 11 and executable on the processor 12, the processor 12 implementing the aforementioned queue data control method when executing the computer program 13.
The application also provides a queue data structure, which comprises a circular queue, a first array and a second array. The circular queue comprises a head pointer and a tail pointer which point to the operation address of the circular queue; the first array is the same as the length of the circular queue and is used for recording element insertion failure events in any address of the circular queue; and the second array has the same length as the circular queue and is used for recording the failure event of taking out the element from any address of the circular queue.
While the foregoing disclosure describes what are considered to be preferred embodiments of the present application, it is not intended to limit the scope of the invention. The invention is intended to cover alternatives, modifications, substitutions, combinations and simplifications that constitute equivalent arrangements without departing from the spirit and scope of the application and the appended claims.

Claims (9)

1. A method for queue data control, comprising:
establishing a plurality of circular queues, wherein each circular queue comprises a head pointer and a tail pointer which point to the operation address of the circular queue;
setting an insertion failure counter which is an array with the length being the same as that of the circular queue, wherein the insertion failure counter is used for recording an insertion element failure event in any address of the circular queue;
setting a take-out failure counter which is an array with the length being the same as that of the circular queue, wherein the take-out failure counter is used for recording a take-out element failure event of any address of the circular queue;
setting an operation lock of an insertion failure counter and an operation lock of a take-out failure counter based on a mutual exclusion lock principle;
the circular queue comprises a first address, the first address has a mapping relation with a first mapping address in an insertion failure counter, and the first address has a mapping relation with a second mapping address in a take-out failure counter;
when the circular queue is subjected to enqueue operation, a producer thread initiates a first data storage request, a queue tail pointer is obtained, whether an element taking-out failure event record exists in the second mapping address or not is checked, if the record exists, the operation lock of the element taking-out failure counter is locked, the operation lock of the element taking-out failure counter is released after the record of the element taking-out failure event is deleted, and the queue tail pointer is obtained again;
checking whether an element in a first address of the circular queue is empty when a value of an element in the second mapped address is equal to an initial value;
responding to the condition that an element in a first address of the circular queue is empty, increasing the value of the queue tail pointer through CAS operation, inserting the element in the first data storage request into the first address of the circular queue through CAS operation, locking an operation lock of an insertion failure counter when the insertion fails, rewriting the initial value of the element in the first mapping address, releasing the operation lock of the insertion failure counter, and acquiring the queue tail pointer again;
when dequeuing the circular queue, a consumer thread initiates a first data fetching request, acquires a queue head pointer, checks whether an insertion element failure event record exists in the first mapping address, locks the insertion failure counter operation lock if the insertion element failure event record exists, releases the insertion failure counter operation lock after deleting the insertion element failure event record, and re-acquires the queue head pointer;
checking whether an element in a first address of the circular queue is empty when a value of the element in the first mapped address is equal to an initial value;
and in response to the fact that the element in the first address of the circular queue is not empty, increasing the value of the queue head pointer through CAS operation, taking out the element in the first address through CAS operation, locking a take-out failure counter operation lock when the take-out fails, rewriting the initial value of the element in the second mapping address, releasing the take-out failure counter operation lock, and acquiring the queue head pointer again.
2. The queue data control method according to claim 1, characterized in that: recording an element insertion failure event in any address of the circular queue through the insertion failure counter, wherein the address of the circular queue and the address of the insertion failure counter have a mapping relation, an element in the first mapping address is provided with an initial value, and when the first address fails to insert the element, the value of the element in the first mapping address is rewritten;
and recording a failure event of taking out an element from any address of the circular queue through the take-out failure counter, wherein the address of the circular queue and the address of the take-out failure counter have a mapping relation, an element in the second mapping address is provided with an initial value, and when the element is taken out from the first address in a failure mode, the value of the element in the second mapping address is rewritten.
3. The queue data control method according to claim 2, wherein when the enqueue operation is performed on the circular queue, the method comprises the steps of:
when increasing the value of the queue tail pointer through CAS operation fails, reacquiring the queue tail pointer;
in response to the fact that an element in a first address of the circular queue is not empty, checking whether enqueuing operation is blocked or not, and when the enqueuing operation is blocked, re-acquiring the queue tail pointer;
when the enqueue operation is non-blocking, rechecking whether an element in the first address of the circular queue is empty, in response to the element in the first address of the circular queue being empty, increasing the value of the queue tail pointer by a CAS operation, and inserting the element in the first data storage request into the first address of the circular queue by the CAS operation.
4. The method of queue data control according to claim 3, characterised in that the method further comprises the steps of:
checking the value of an element in the second mapping address, locking the operation lock of the fetch failure counter and confirming the value of the element in the second mapping address again when the value of the element in the second mapping address is not equal to the initial value, and increasing the value of the queue tail pointer through CAS operation when the value of the element in the second mapping address is not equal to the initial value;
when the CAS operation succeeds in increasing the value of the queue tail pointer, rewriting the value of the element in the second mapping address into an initial value, releasing the operation lock of the fetch failure counter, and reacquiring the queue tail pointer;
and when the CAS operation fails to increase the value of the queue tail pointer, releasing the operation lock of the fetch failure counter and reacquiring the queue tail pointer.
5. The method of queue data control according to claim 3, characterised in that the method further comprises the steps of:
overwriting initial values of elements in the first mapped address is achieved by adding one to the values of the elements in the first mapped address.
6. The method of queue data control of claim 2, wherein when the consumer thread initiates a first data fetch request, the method comprises the steps of:
when increasing the value of the head of line pointer fails through CAS operation, the head of line pointer is acquired again;
responding to the condition that an element in a first address of the circular queue is empty, checking whether dequeue operation is blocked, and when the dequeue operation is blocked, re-acquiring the head-of-queue pointer;
when the dequeue operation is not blocked, rechecking whether an element in the first address of the circular queue is empty, and in response to the element in the first address of the circular queue being not empty, increasing the value of the head of queue pointer by a CAS operation and fetching the element in the first address by the CAS operation.
7. The method of queue data control according to claim 6, characterised in that the method further comprises the steps of:
checking the value of an element in the first mapping address, locking and inserting a failure counter operation lock and confirming the value of the element in the first mapping address again when the value of the element in the first mapping address is not equal to the initial value, and increasing the value of a head of queue pointer through CAS operation when the value of the element in the first mapping address is not equal to the initial value;
when the CAS operation succeeds in increasing the value of the head of line pointer, rewriting the value of the element in the first mapping address into an initial value, releasing the operation lock of the insertion failure counter, and reacquiring the head of line pointer;
when the CAS operation fails to increment the value of the head of line pointer, the insertion failure counter operation lock is released and the head of line pointer is reacquired.
8. The method of queue data control according to claim 6, characterised in that the method further comprises the steps of:
overwriting the initial values of the elements in the second mapped address is achieved by adding one to the values of the elements in the second mapped address.
9. A queue data control system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the queue data control method of any one of claims 1 to 8 when executing the computer program.
CN202110921877.4A 2021-08-12 2021-08-12 Queue data control method, system and queue data structure Active CN113377549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110921877.4A CN113377549B (en) 2021-08-12 2021-08-12 Queue data control method, system and queue data structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110921877.4A CN113377549B (en) 2021-08-12 2021-08-12 Queue data control method, system and queue data structure

Publications (2)

Publication Number Publication Date
CN113377549A CN113377549A (en) 2021-09-10
CN113377549B true CN113377549B (en) 2021-12-07

Family

ID=77576863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110921877.4A Active CN113377549B (en) 2021-08-12 2021-08-12 Queue data control method, system and queue data structure

Country Status (1)

Country Link
CN (1) CN113377549B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115348218B (en) * 2022-10-18 2022-12-27 井芯微电子技术(天津)有限公司 Queue scheduling method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7447875B1 (en) * 2003-11-26 2008-11-04 Novell, Inc. Method and system for management of global queues utilizing a locked state
US7865638B1 (en) * 2007-08-30 2011-01-04 Nvidia Corporation System and method for fast hardware atomic queue allocation
CN103377043A (en) * 2012-04-24 2013-10-30 腾讯科技(深圳)有限公司 Message queue achieving method and system and message queue processing system
CN112506683A (en) * 2021-01-29 2021-03-16 腾讯科技(深圳)有限公司 Data processing method, related device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113176896B (en) * 2021-03-19 2022-12-13 中盈优创资讯科技有限公司 Method for randomly taking out object based on single-in single-out lock-free queue

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7447875B1 (en) * 2003-11-26 2008-11-04 Novell, Inc. Method and system for management of global queues utilizing a locked state
US7865638B1 (en) * 2007-08-30 2011-01-04 Nvidia Corporation System and method for fast hardware atomic queue allocation
CN103377043A (en) * 2012-04-24 2013-10-30 腾讯科技(深圳)有限公司 Message queue achieving method and system and message queue processing system
CN112506683A (en) * 2021-01-29 2021-03-16 腾讯科技(深圳)有限公司 Data processing method, related device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Dolev Adas et al. "A Fast, Memory Efficient, Wait-Free Multi-Producers." https://arxiv.org/pdf/2010.14189, 2020. *
Wu Qingrong. "Research and Implementation of a Highly Concurrent Lock-Free Scheduling Algorithm for Protocol Stack Software." China Master's Theses Full-text Database (Information Science and Technology), No. 04, 2021-04-15, full text. *

Also Published As

Publication number Publication date
CN113377549A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
US7779165B2 (en) Scalable method for producer and consumer elimination
US5442763A (en) System and method for preventing deadlock in multiprocessor multiple resource instructions
US6889269B2 (en) Non-blocking concurrent queues with direct node access by threads
JP6238898B2 (en) System and method for providing and managing message queues for multi-node applications in a middleware machine environment
US8473950B2 (en) Parallel nested transactions
CN110727675B (en) Method and device for processing linked list
US7533138B1 (en) Practical lock-free doubly-linked list
US20090320030A1 (en) Method for management of timeouts
US20040107227A1 (en) Method for efficient implementation of dynamic lock-free data structures with safe memory reclamation
US11132294B2 (en) Real-time replicating garbage collection
US10416925B2 (en) Distributing computing system implementing a non-speculative hardware transactional memory and a method for using same for distributed computing
JPH0642204B2 (en) How to remove elements from a queue or stack
US11714801B2 (en) State-based queue protocol
US7389291B1 (en) Implementing optimistic concurrent data structures
CN1908890A (en) Method and apparatus for processing a load-lock instruction using a scoreboard mechanism
CN113377549B (en) Queue data control method, system and queue data structure
US9110791B2 (en) Optimistic object relocation
US6976260B1 (en) Method and apparatus for serializing a message queue in a multiprocessing environment
Gidenstam et al. Cache-aware lock-free queues for multiple producers/consumers and weak memory consistency
Luchangco et al. On the uncontended complexity of consensus
US9009730B2 (en) Transaction capable queuing
Nikolaev et al. Wcq: A fast wait-free queue with bounded memory usage
US20230252081A1 (en) Scalable range locks
US20080034169A1 (en) Pseudo-FIFO memory configuration using dual FIFO memory stacks operated by atomic instructions
Zuepke et al. Deterministic futexes revisited

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant