CN114020658A - Multithreading linked list processing method and related device - Google Patents
- Publication number
- CN114020658A (application number CN202111257317.XA)
- Authority
- CN
- China
- Prior art keywords
- linked list
- access right
- data
- address
- entered
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/14—Protection against unauthorised use of memory or access to memory
- G06F12/1416—Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights
- G06F12/1425—Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights the protection being physical, e.g. cell, word, block
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/14—Protection against unauthorised use of memory or access to memory
- G06F12/1458—Protection against unauthorised use of memory or access to memory by checking the subject access rights
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application discloses a multithreading linked list processing method, which comprises the following steps: obtaining the access right of the memory address of the linked list; when the access right is obtained, locking other buses out of the access right of the linked list through a lock bus assembly instruction; assigning the address of the IO data to be added to the list to the linked list; and operating on the IO data based on the linked list. The access right of the memory address of the linked list is obtained first, and then all other bus accesses to that memory address are locked out through the lock bus assembly instruction. By obtaining the access right first and then locking the bus, rather than contending for a separate lock, the multithreading concurrency problem is avoided, the steps of operating the linked list are reduced, the operation time delay is reduced, and the efficiency of linked list operation is improved. The application also discloses a multithreading linked list processing device, a server and a computer-readable storage medium, which have the same beneficial effects.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a multithreading linked list processing method, a multithreading linked list processing apparatus, a server, and a computer-readable storage medium.
Background
In a server, the storage controller mainly manages the hard disks at the storage end and the data stored on them, and provides data access services for the server end. Because a large number of inputs/outputs (IO) must be processed in a short time, the central processing unit (CPU) in the storage controller bears a large amount of unpacking, disk-reading and disk-writing work while providing data access to the server.
In the related art, a data IO passes through multiple operation stages in the backend process, such as the storage volume, storage pool, RAID and storage disk stages. Each stage processes a large amount of IO data, and the IO data are generally organized in the form of a linked list. When multiple threads process the linked list simultaneously, a spinlock or mutex is generally added to the data structure of the linked list to avoid concurrency problems, and each thread contends for the processing right of the linked list by acquiring the lock. The lock mechanism therefore has to be handled throughout processing, which wastes a large amount of processing performance and causes higher time delay.
Therefore, how to reduce the time delay of the processing procedure and improve the processing efficiency is a key issue for those skilled in the art.
Disclosure of Invention
The present application aims to provide a multithreading linked list processing method, a multithreading linked list processing apparatus, a server, and a computer-readable storage medium, so as to improve the efficiency of processing a linked list and reduce the time delay.
To solve the above technical problem, the present application provides a multithreading linked list processing method, including:
obtaining the access right of the memory address of the linked list;
when the access right is obtained, locking other buses out of the access right of the linked list through a lock bus assembly instruction;
assigning the address of the IO data to be added to the list to the linked list;
and operating the IO data based on the linked list.
Optionally, obtaining the access right of the memory address of the linked list includes:
and acquiring the access right of the memory address of the head pointer of the linked list.
Optionally, when the access right is obtained, locking other buses out of the access right of the linked list through a lock bus assembly instruction includes:
and when the access right is obtained, locking the access right of other buses to the head pointer of the linked list through a lock bus assembly instruction.
Optionally, assigning the address of the IO data to be added to the list to the linked list includes:
assigning the head address of the IO data to be added to the list to the head pointer of the linked list;
obtaining the access right of the memory address of the tail pointer of the linked list;
when the access right is obtained, locking other buses out of the access right of the tail pointer of the linked list through a lock bus assembly instruction;
and assigning the tail address of the IO data to be added to the list to the tail pointer of the linked list.
Optionally, assigning the address of the IO data to be added to the list to the linked list includes:
judging whether the head pointer of the linked list is null;
if so, assigning the head address of the IO data to be added to the list to the head pointer of the linked list;
and if not, assigning the head address of the IO data to be added to the list to the tail pointer of the linked list.
Optionally, the operating the IO data based on the linked list includes:
and performing read-write operation on the IO data based on the linked list.
Optionally, the method further includes:
when another thread acquires the access right of the head pointer of the linked list, taking the old element pointed to by the head pointer off the linked list;
and pointing the head pointer of the linked list to a new element.
The present application further provides a multithreading linked list processing apparatus, including:
the access right acquisition module is used for acquiring the access right of the memory address of the linked list;
the bus locking module is used for locking other buses out of the access right of the linked list through a lock bus assembly instruction when the access right is obtained;
the address assignment module is used for assigning the address of the IO data to be added to the list to the linked list;
and the linked list operation module is used for operating the IO data based on the linked list.
The present application further provides a server, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the linked list processing method as described above when executing the computer program.
The present application further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the linked list processing method as described above.
The application provides a multithreading linked list processing method, which comprises the following steps: obtaining the access right of the memory address of the linked list; when the access right is obtained, locking other buses out of the access right of the linked list through a lock bus assembly instruction; assigning the address of the IO data to be added to the list to the linked list; and operating on the IO data based on the linked list.
The access right of the memory address of the linked list is obtained first, and then all other bus accesses to that memory address are locked out through the lock bus assembly instruction. By obtaining the access right first and then locking the bus, rather than contending for a separate lock, the multithreading concurrency problem is avoided, the steps of operating the linked list are reduced, the operation time delay is reduced, and the efficiency of linked list operation is improved.
The present application further provides a multithreading linked list processing apparatus, a server, and a computer-readable storage medium, which have the above beneficial effects and are not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a multithreading linked list processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a first data structure of a multithreading linked list processing method according to an embodiment of the present application;
fig. 3 is a second data structure diagram of a multithreading linked list processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a multithreading linked list processing apparatus according to an embodiment of the present disclosure.
Detailed Description
The core of the application is to provide a multithreading linked list processing method, a multithreading linked list processing device, a server and a computer readable storage medium, so as to improve the efficiency of processing the linked list and reduce the time delay.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the related art, a data IO passes through multiple operation stages in the backend process, such as the storage volume, storage pool, RAID and storage disk stages. Each stage processes a large amount of IO data, and the IO data are generally organized in the form of a linked list. When multiple threads process the linked list simultaneously, a spinlock or mutex is generally added to the data structure of the linked list to avoid concurrency problems, and each thread contends for the processing right of the linked list by acquiring the lock. The lock mechanism therefore has to be handled throughout processing, which wastes a large amount of processing performance and causes higher time delay.
Therefore, the present application provides a multithreading linked list processing method: the access right of the memory address of the linked list is obtained first, and then all other bus accesses to that memory address are locked out through a lock bus assembly instruction. By obtaining the access right first and then locking the bus, rather than contending for a separate lock, the multithreading concurrency problem is avoided, the steps of operating the linked list are reduced, the operation time delay is reduced, and the efficiency of linked list operation is improved.
The following describes a multithreading linked list processing method according to an embodiment.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for processing a linked list of multiple threads according to an embodiment of the present disclosure.
In this embodiment, the method may include:
s101, obtaining the access right of the memory address of the linked list;
it can be seen that this step is directed to obtaining access to the memory addresses of the linked list. Access to the memory address of the linked list may be contended for from multiple threads. The method for obtaining the contention may be obtaining according to the time for obtaining the access right, or according to the priority of the thread, or according to the importance degree of the thread. It is to be noted that the manner of obtaining the access right in this step is not exclusive, and is not limited herein.
Further, the step may include:
and obtaining the access right of the memory address of the head pointer of the linked list.
It can be seen that the present alternative is mainly to explain how to obtain access rights. In this alternative, the access right of the memory address of the head pointer of the linked list is obtained. That is, access to the head pointer of the linked list may be obtained to operate on the head pointer of the linked list.
S102, when the access right is obtained, locking the access right of the linked list by other buses through a bus assembly instruction;
On the basis of S101, this step is intended to lock other buses out of the access right of the linked list through the lock bus assembly instruction once the access right is acquired. That is, when the thread acquires the access right of the linked list, the lock bus assembly instruction is executed to lock other buses out of the linked list, so that the linked list cannot be operated on by other threads while the current thread operates on it. In addition, in this embodiment the thread does not operate on the linked list by first acquiring a software lock, so the locking procedure is reduced from two steps (acquire the lock, then operate) to one.
The lock bus assembly instructions may be SMP_LOCK and cmpxchg. SMP_LOCK is a lock-bus instruction: while the bus is locked, other CPUs cannot access the memory unit at the locked address. cmpxchg may be used with the LOCK prefix, in which case the instruction executes atomically. To simplify the processor's bus interface, the destination operand receives a write cycle regardless of the result of the comparison: if the comparison fails, the destination operand's original value is written back; otherwise, the source operand is written to the destination. (The processor never generates a locked read without a corresponding locked write.)
Further, the step may include:
when the access right is obtained, other buses are locked out of the access right of the head pointer of the linked list through the lock bus assembly instruction.
It can be seen that this alternative mainly illustrates how the locking process may be performed; on the basis of the previous alternative, it locks the head pointer. In this alternative, when the access right is obtained, other buses are locked out of the access right of the head pointer of the linked list through the lock bus assembly instruction. That is, when any thread acquires the access right, all other bus accesses to the memory address are locked out, so that the memory address cannot be operated on by other threads.
S103, assigning the address of the IO data to be added to the list to the linked list;
On the basis of S102, this step aims to assign the address of the IO data to be added to the list to the linked list. That is, the pointer of the linked list is made to point at the IO data so that the IO data can be operated on through the linked list.
Further, the step may include:
step 1, assigning the head address of the IO data to be added to the list to the head pointer of the linked list;
step 2, obtaining the access right of the memory address of the tail pointer of the linked list;
step 3, when the access right is obtained, locking other buses out of the access right of the tail pointer of the linked list through a lock bus assembly instruction;
and step 4, assigning the tail address of the IO data to be added to the list to the tail pointer of the linked list.
It can be seen that this alternative mainly illustrates how the assignment may be made. The head address of the IO data is first assigned to the head pointer of the linked list; the access right of the memory address of the tail pointer is then obtained; when it is obtained, other buses are locked out of the tail pointer through the lock bus assembly instruction; and the tail address of the IO data is assigned to the tail pointer. That is, in this embodiment the head pointer is assigned first, and then the tail pointer.
Further, the step may include:
step 1, judging whether the head pointer of the linked list is null;
step 2, if so, assigning the head address of the IO data to be added to the list to the head pointer of the linked list;
and step 3, if not, assigning the head address of the IO data to be added to the list to the tail pointer of the linked list.
It can be seen that this alternative mainly describes how the head pointer is assigned. Whether the head pointer of the linked list is null is judged; if so, the head address of the IO data is assigned to the head pointer of the linked list, and if not, it is assigned to the tail pointer of the linked list.
And S104, operating the IO data based on the linked list.
On the basis of S103, this step aims to operate on IO data based on a linked list.
The process of operating IO data based on the linked list may refer to any one of the linked list operation modes provided in the prior art, and is not specifically limited herein.
Further, the step may include:
and reading and writing the IO data based on the linked list.
It can be seen that this alternative mainly illustrates how the IO data may be operated on. In this alternative, the IO data may be read and written based on the linked list.
Further, this embodiment may further include:
step 1, when another thread acquires the access right of the head pointer of the linked list, taking the old element pointed to by the head pointer off the linked list;
and step 2, pointing the head pointer of the linked list to the new element.
It can be seen that this alternative mainly illustrates how multithreaded operation can be implemented. When another thread acquires the access right of the head pointer of the linked list, the old element pointed to by the head pointer is taken off the linked list, and the head pointer is then pointed at the new element.
In summary, in this embodiment the access right of the memory address of the linked list is obtained first, and then all other bus accesses to that memory address are locked out through the lock bus assembly instruction. By obtaining the access right first and then locking the bus, rather than contending for a separate lock, the multithreading concurrency problem is avoided, the steps of operating the linked list are reduced, the operation time delay is reduced, and the efficiency of linked list operation is improved.
The following further describes a multithreading linked list processing method provided by the present application with a specific embodiment.
Referring to fig. 2 and fig. 3, fig. 2 is a schematic diagram of a first data structure of a multithreading linked list processing method according to an embodiment of the present disclosure, and fig. 3 is a schematic diagram of a second data structure of the multithreading linked list processing method according to the embodiment of the present disclosure.
In this embodiment, the method may include:
First, a data structure io_data and a linked list structure list_new are defined, as shown in fig. 2 and fig. 3.
It can be seen that, compared with a conventional multithreaded shared linked list, the list_new linked list defined in this embodiment has only two member variables, a head pointer (iodatahead) and a tail pointer (iodatatail), and has no separate synchronization lock or mutex variable for avoiding concurrent processing.
Therefore, when a new io_data performs the enqueue operation, that is, when multiple threads need to mount new io_data on the linked list, no separate mutex is needed to ensure the consistency of the tail data during multithreaded processing; instead, a section of embedded assembly completes multithreaded, concurrency-consistent access to the data through an atomic mutual-exclusion operation on the tail variable of the list_new linked list.
The atomic mutual-exclusion access mainly uses the two instruction-level operations SMP_LOCK and cmpxchg to lock access to the list_new linked list, so that exclusive access is enjoyed and the multithreading concurrency problem is avoided. These two instructions first obtain the access right to the tail memory address and then lock the bus, that is, all other bus accesses to that memory address are locked out; the value in the memory address is modified, the bus is released after the modification is finished, and other access requests then contend for the access right again. The enqueue operation of the new io_data is thus completed while the exclusive right is held, which saves time and improves efficiency.
The enqueue operation of the new io_data may include:
and step 1, judging whether the head pointer is null or not by means of the list _ new _ head _ add function, and pointing the head pointer to the new io _ data if the head pointer is null. That is, the same atomic operation is adopted, the exclusive access right of the head is firstly obtained, whether null exists or not is judged, and if null exists, the address of the io _ data is assigned to the head.
And 2, adding the new io _ data into the linked list, and operating by virtue of a list _ new _ add function as above, wherein the exclusive access right of tail is obtained firstly.
And 3, multi-thread extraction of the io _ data on the linked list, and extraction of the io _ data on the linked list for subsequent processing while obtaining the head exclusive access right.
Specifically, the processing procedure of the linked list when the multithreading is performed may include:
Step 1, the threads operating the linked list are bound to different CPU cores, which requires the hardware to have enough CPU cores for the threads to use, and the two pointer variables in the linked list are initialized to null pointers.
Step 2, when multiple threads add new io_data to the linked list queue, the next pointer in the io_data is cleared, the list_new_add function is called, and the access right of the head/tail variable is acquired by means of the SMP_LOCK and cmpxchg instructions to complete the addition.
Step 3, when multiple threads extract io_data from the linked list for further processing, the list_new_add function is called to preempt the exclusive access right of head, the element 1 pointed to by head is taken off the linked list, and head is then changed to point at element 2.
It can be seen that in this embodiment the access right of the memory address of the linked list is obtained first, and then all other bus accesses to that memory address are locked out through the lock bus assembly instruction. By obtaining the access right first and then locking the bus, rather than contending for a separate lock, the multithreading concurrency problem is avoided, the steps of operating the linked list are reduced, the operation time delay is reduced, and the efficiency of linked list operation is improved.
In the following, the multithreading linked list processing apparatus provided in the embodiment of the present application is introduced, and the multithreading linked list processing apparatus described below and the multithreading linked list processing method described above may be referred to in a corresponding manner.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a multithreading linked list processing apparatus according to an embodiment of the present disclosure.
In this embodiment, the apparatus may include:
an access right obtaining module 100, configured to obtain an access right of a memory address of a linked list;
the bus locking module 200 is used for locking the access right of the linked list by other buses through a bus assembly instruction when the access right is obtained;
an address assignment module 300, configured to assign an address of IO data to be entered into a table to a linked list;
and a linked list operation module 400, configured to operate on IO data based on a linked list.
Optionally, the access right obtaining module 100 is specifically configured to obtain an access right of a memory address of a head pointer of a linked list.
Optionally, the bus locking module 200 is specifically configured to lock other buses out of the access right of the head pointer of the linked list through a lock bus assembly instruction when the access right is obtained.
Optionally, the address assignment module 300 is specifically configured to assign the head address of the IO data to be added to the list to the head pointer of the linked list; obtain the access right of the memory address of the tail pointer of the linked list; lock other buses out of the access right of the tail pointer of the linked list through a lock bus assembly instruction when the access right is obtained; and assign the tail address of the IO data to be added to the list to the tail pointer of the linked list.
Optionally, the address assignment module 300 is specifically configured to judge whether the head pointer of the linked list is null; if so, assign the head address of the IO data to be added to the list to the head pointer of the linked list; and if not, assign the head address of the IO data to be added to the list to the tail pointer of the linked list.
Optionally, the linked list operation module 400 is specifically configured to perform read-write operation on IO data based on a linked list.
Optionally, the apparatus may further include:
the multithreading operation module is used for taking the old element pointed to by the head pointer off the linked list when another thread acquires the access right of the head pointer of the linked list, and pointing the head pointer of the linked list at the new element.
An embodiment of the present application further provides a server, including:
a memory for storing a computer program;
a processor for implementing the steps of the linked list processing method as described in the above embodiments when executing the computer program.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the linked list processing method according to the above embodiment are implemented.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The multithreading linked list processing method, the multithreading linked list processing device, the server and the computer readable storage medium provided by the application are described in detail above. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
Claims (10)
1. A method for processing a multithreaded linked list, comprising:
acquiring the access right to the memory address of the linked list;
when the access right is obtained, locking access of other bus masters to the linked list through a lock bus assembly instruction;
assigning the address of the IO data to be enqueued to the linked list;
and operating on the IO data based on the linked list.
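Read as an algorithm, claim 1 describes an atomic update of the list's memory address: win access to the address, lock out other bus masters for the duration of the update, then publish the new address. A minimal C11 sketch, assuming the compare-and-swap below compiles to a lock-prefixed instruction (as it does on x86); the identifiers (`io_node`, `list_push`) are illustrative and not taken from the patent:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Illustrative node holding the address of the IO data to be enqueued. */
typedef struct io_node {
    void *io_data;            /* address of the IO data */
    struct io_node *next;
} io_node;

/* Head pointer of the linked list. Atomic operations on it compile to
 * lock-prefixed instructions on x86, which lock the bus/cache line so
 * no other bus master can touch the address mid-update. */
static _Atomic(io_node *) list_head = NULL;

/* Assign the address of the new IO data to the linked list: retry the
 * compare-and-swap until this thread wins access to the head. */
void list_push(io_node *n) {
    io_node *old = atomic_load(&list_head);
    do {
        n->next = old;        /* link new node ahead of current head */
    } while (!atomic_compare_exchange_weak(&list_head, &old, n));
}
```

Here the lock semantics of the claim are supplied implicitly by `atomic_compare_exchange_weak` rather than by hand-written assembly.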
2. The linked list processing method of claim 1, wherein acquiring the access right to the memory address of the linked list comprises:
acquiring the access right to the memory address of the head pointer of the linked list.
3. The method as claimed in claim 2, wherein, when the access right is obtained, locking access of other bus masters to the linked list through a lock bus assembly instruction comprises:
when the access right is obtained, locking access of other bus masters to the head pointer of the linked list through a lock bus assembly instruction.
4. The linked list processing method of claim 3, wherein assigning the address of the IO data to be enqueued to the linked list comprises:
assigning the head address of the IO data to be enqueued to the head pointer of the linked list;
acquiring the access right to the memory address of the tail pointer of the linked list;
when the access right is obtained, locking access of other bus masters to the tail pointer of the linked list through a lock bus assembly instruction;
and assigning the tail address of the IO data to be enqueued to the tail pointer of the linked list.
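Claim 4's sequence (publish at the head, then win and update the tail pointer) resembles a two-pointer FIFO enqueue. A hedged sketch assuming an atomic exchange on the tail pointer (on x86, `XCHG` with memory is implicitly bus-locked); `enqueue` and the queue names are illustrative, not from the patent:

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct io_node {
    void *io_data;
    _Atomic(struct io_node *) next;
} io_node;

static _Atomic(io_node *) q_head = NULL;  /* head pointer of the list */
static _Atomic(io_node *) q_tail = NULL;  /* tail pointer of the list */

/* FIFO enqueue in the spirit of claim 4: take the tail pointer
 * atomically (an atomic exchange, i.e. an implicitly locked XCHG on
 * x86), then link the previous tail to the new node, or publish the
 * node as the head if the list was empty. */
void enqueue(io_node *n) {
    atomic_store(&n->next, NULL);
    io_node *prev = atomic_exchange(&q_tail, n);   /* own the tail slot */
    if (prev != NULL)
        atomic_store(&prev->next, n);              /* append after old tail */
    else
        atomic_store(&q_head, n);                  /* list was empty */
}
```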
5. The linked list processing method of claim 1, wherein assigning the address of the IO data to be enqueued to the linked list comprises:
determining whether the head pointer of the linked list is null;
if so, assigning the head address of the IO data to be enqueued to the head pointer of the linked list;
and if not, assigning the head address of the IO data to be enqueued to the tail pointer of the linked list.
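Stripped of the locking, the branch in claim 5 is an ordinary tail append with an empty-list special case. A single-threaded sketch with illustrative names:

```c
#include <stddef.h>

typedef struct node {
    void *io_data;
    struct node *next;
} node;

typedef struct {
    node *head;   /* head pointer of the list */
    node *tail;   /* tail pointer of the list */
} list;

/* If the head pointer is null the list is empty, so the IO data's
 * head address becomes the list head; otherwise it is chained in at
 * the list tail. In both cases the new node becomes the new tail. */
void append(list *l, node *n) {
    n->next = NULL;
    if (l->head == NULL)
        l->head = n;          /* empty list: new node is the head */
    else
        l->tail->next = n;    /* non-empty: link after current tail */
    l->tail = n;
}
```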
6. The linked list processing method of claim 1, wherein operating on the IO data based on the linked list comprises:
performing read and write operations on the IO data based on the linked list.
7. The linked list processing method as set forth in claim 1, further comprising:
when another thread acquires the access right to the head pointer of the linked list, removing the old element pointed to by the head pointer of the linked list;
and pointing the head pointer of the linked list to a new element.
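Claim 7 describes the consumer side: once a thread holds the access right to the head pointer, it removes the old head element and repoints the head at the next one. A sketch using the same C11 atomics as the producer side; note that a production lock-free queue must also handle the ABA problem, which this sketch deliberately ignores:

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct io_node {
    void *io_data;
    struct io_node *next;
} io_node;

static _Atomic(io_node *) list_head = NULL;

/* Detach the old head element and point the head at the next element.
 * The compare-and-swap again maps to a lock-prefixed CMPXCHG on x86,
 * so only the winning thread takes down any given element. */
io_node *pop(void) {
    io_node *old = atomic_load(&list_head);
    while (old != NULL &&
           !atomic_compare_exchange_weak(&list_head, &old, old->next))
        ;                      /* old is reloaded on CAS failure */
    return old;                /* NULL if the list was empty */
}
```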
8. A multithreaded linked list processing apparatus, comprising:
the access right acquisition module is used for acquiring the access right to the memory address of the linked list;
the bus locking module is used for locking access of other bus masters to the linked list through a lock bus assembly instruction when the access right is obtained;
the address assignment module is used for assigning the address of the IO data to be enqueued to the linked list;
and the linked list operation module is used for operating on the IO data based on the linked list.
9. A server, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the linked list processing method of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the linked list processing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111257317.XA CN114020658A (en) | 2021-10-27 | 2021-10-27 | Multithreading linked list processing method and related device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114020658A true CN114020658A (en) | 2022-02-08 |
Family
ID=80058300
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111257317.XA Pending CN114020658A (en) | 2021-10-27 | 2021-10-27 | Multithreading linked list processing method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114020658A (en) |
2021
- 2021-10-27 CN CN202111257317.XA patent/CN114020658A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8973004B2 (en) | Transactional locking with read-write locks in transactional memory systems | |
US8539168B2 (en) | Concurrency control using slotted read-write locks | |
US7962923B2 (en) | System and method for generating a lock-free dual queue | |
US7421544B1 (en) | Facilitating concurrent non-transactional execution in a transactional memory system | |
US8166255B2 (en) | Reservation required transactions | |
US8302105B2 (en) | Bulk synchronization in transactional memory systems | |
CN110888727B (en) | Method, device and storage medium for realizing concurrent lock-free queue | |
US8141076B2 (en) | Cell processor methods and apparatus | |
US9086911B2 (en) | Multiprocessing transaction recovery manager | |
JP2010524133A (en) | Transactional memory using buffered writes and forced serialization order | |
KR100902977B1 (en) | Hardware sharing system and method | |
CN110597606B (en) | Cache-friendly user-level thread scheduling method | |
US20020138706A1 (en) | Reader-writer lock method and system | |
US9569265B2 (en) | Optimization of data locks for improved write lock performance and CPU cache usage in multi core architectures | |
CN111459691A (en) | Read-write method and device for shared memory | |
CN112416556B (en) | Data read-write priority balancing method, system, device and storage medium | |
US20110093663A1 (en) | Atomic compare and write memory | |
JP6468053B2 (en) | Information processing apparatus, parallel processing program, and shared memory access method | |
US20230252081A1 (en) | Scalable range locks | |
CN114020658A (en) | Multithreading linked list processing method and related device | |
US11822815B2 (en) | Handling ring buffer updates | |
US7996848B1 (en) | Systems and methods for suspending and resuming threads | |
US20130166887A1 (en) | Data processing apparatus and data processing method | |
US7447875B1 (en) | Method and system for management of global queues utilizing a locked state | |
CN115061818A (en) | Multithreading linked list operating system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||