CN104572568B - Read lock operation method, write lock operation method and system - Google Patents

Read lock operation method, write lock operation method and system

Info

Publication number
CN104572568B
CN104572568B
Authority
CN
China
Prior art keywords
lock
core
read
write
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310482117.3A
Other languages
Chinese (zh)
Other versions
CN104572568A (en)
Inventor
席华锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Oceanbase Technology Co Ltd
Original Assignee
ANT Financial Hang Zhou Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ANT Financial Hang Zhou Network Technology Co Ltd filed Critical ANT Financial Hang Zhou Network Technology Co Ltd
Priority to CN202111082328.9A priority Critical patent/CN113835901A/en
Priority to CN201310482117.3A priority patent/CN104572568B/en
Publication of CN104572568A publication Critical patent/CN104572568A/en
Application granted granted Critical
Publication of CN104572568B publication Critical patent/CN104572568B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiments of the application disclose a read lock operation method, a write lock operation method and a system. The read lock operation method comprises the following steps: setting a private reference count corresponding to each core; and, while threads of different cores read the same data, performing read-lock acquisition and read-lock release operations on the private reference counts corresponding to the different cores. With the embodiments of the application, when threads of different cores read the same data, each core operates independently on its own private reference count. The private reference counts of different cores do not need to be synchronized between cores, so execution efficiency is improved. The scalability of the read lock is also improved: no matter how many cores' threads acquire and release the read lock simultaneously, the time needed for acquiring and releasing the read lock hardly increases, which further improves execution efficiency.

Description

Read lock operation method, write lock operation method and system
Technical Field
The present application relates to the field of computer system architecture technologies, and in particular, to a read lock operation method, a write lock operation method, and a system.
Background
A thread, also called a lightweight process (LWP), is a single sequential flow of control within a process and is the smallest unit of program execution. In an operating system that supports threads, the process is generally the basic unit of resource allocation, while the thread is the basic unit of independent execution and scheduling. Threads may execute concurrently: multiple threads in one process may execute concurrently, and threads in different processes may also execute concurrently. In particular, in a computer system with multiple computing cores, such as a computer system with multiple CPU cores, threads on different cores may also execute concurrently.
When multiple threads execute concurrently, they often need to access the same data. From the perspective of the accessed data, the data is shared among different threads. When multiple threads access such shared data, the integrity of the shared data must be guaranteed. For example, two threads cannot modify the shared data at the same time, and one thread cannot read shared data that another thread has only half modified. The classical approach is a lock mechanism: a "read lock" is applied to data while a thread performs a read operation, and a "write lock" is applied while a thread performs a write operation. Before a thread reads a piece of data, it acquires a read lock on that data, and after the read operation is finished it releases the read lock. Similarly, before a thread performs a write operation on a piece of data, it acquires a write lock on the data, and after the write operation is completed it releases the write lock. Typically, read_ref is used as the reference count of the reading threads, and writer_ID is used to record the ID of the writing thread.
For read operations performed on the same data by different threads, multiple read locks may be held at once. For example, if thread 1 is about to perform a read operation on a piece of data, it acquires a read lock before reading; specifically, it adds 1 to the value of read_ref (for example, read_ref is an integer whose initial value is 0) and then reads the data. While thread 1 is reading, thread 2 also performs a read operation on the same data, so it also adds 1 to the value of read_ref and reads the data. The value of read_ref at this point is 2. After thread 1 finishes its read operation, it subtracts 1 from the value of read_ref, releasing its read lock; the value of read_ref is now 1. Later, thread 2 finishes its read operation on the data, subtracts 1 from the value of read_ref, and releases its read lock; the value of read_ref is now 0. Read locks on the same data can thus be held concurrently, which is why read locks are said to be shared.
For write operations performed on the same data by different threads, the write lock can only be held by one thread at a time. For example, if thread 1 is about to perform a write operation on a piece of data, it acquires the write lock before writing; specifically, it updates the value of writer_ID to the ID of thread 1 (for example, writer_ID is an integer whose initial value is 0, and no thread has the ID 0) and then writes the data. While thread 1 is writing, thread 2 also attempts a write operation on the same data, but because the value of writer_ID is not 0 at this point, thread 2 cannot acquire the write lock and cannot write the data. After the write operation of thread 1 is completed, it releases the lock, i.e., updates the value of writer_ID to 0. After its earlier failure to acquire the write lock and a period of waiting, thread 2 finds that the value of writer_ID is now 0 and can acquire the write lock: it updates the value of writer_ID to the ID of thread 2 and then writes the data. After the write operation of thread 2 is completed, it releases the lock, i.e., updates the value of writer_ID to 0. As can be seen, write locks on the same data cannot be held concurrently; write locks are mutually exclusive.
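The background mechanism just described can be summarized in a minimal sketch (the variable names read_ref and writer_ID follow the text; the use of C++ std::atomic, and expressing the "only one writer" rule as a compare-and-swap, are assumptions, since the text only requires that the updates be atomic):

```cpp
#include <atomic>

std::atomic<int>  read_ref{0};    // global reference count of active readers
std::atomic<long> writer_ID{0};   // ID of the current writing thread, 0 = no writer

// Shared read lock: any number of readers may hold it at the same time.
void read_lock()   { read_ref.fetch_add(1); }
void read_unlock() { read_ref.fetch_sub(1); }

// Exclusive write lock: succeeds only while no other writer holds it.
bool try_write_lock(long thread_id) {
    long expected = 0;                        // 0 means "no writer"
    return writer_ID.compare_exchange_strong(expected, thread_id);
}
void write_unlock() { writer_ID.store(0); }
```

A thread whose try_write_lock call returns false behaves like thread 2 above: it waits and retries until writer_ID has returned to 0.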
In addition, write locks and read locks are mutually exclusive: at any time, a write lock cannot be acquired on data that already holds a read lock, and a read lock cannot be acquired on data that already holds a write lock. Thus, before a thread reads a piece of data, it needs to check whether the writer_ID value of that data is 0. If it is 0, the read operation can proceed; if it is not 0, the thread must wait until writer_ID becomes 0. Similarly, before a thread writes a piece of data, it needs to check whether the read_ref value of that data is 0. If it is 0, the write operation can proceed; if it is not 0, the thread must wait until read_ref becomes 0. In fact, for read-lock acquisition, in order to further guard against another thread acquiring the write lock for a write operation between the check of the writer_ID value and the corresponding read operation, i.e., to avoid the conflict detection failing in that case, the thread checks again whether writer_ID is 0 after adding 1 to the value of read_ref, and performs the read operation only if it is still 0. Similarly, for write-lock acquisition, in order to further guard against another thread acquiring a read lock for a read operation between the check of the read_ref value and the corresponding write operation, the thread checks again whether read_ref is 0 after updating the value of writer_ID to its own ID, and performs the write operation only if it is still 0.
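The mutual exclusion and the re-checks described in this paragraph could be layered on top of the previous sketch roughly as follows (the text does not specify whether a thread that detects a conflict backs off or waits; this sketch undoes its tentative lock and lets the caller retry):

```cpp
// Reader: tentatively take the read lock, then re-check for a writer.
bool try_read_lock_checked() {
    if (writer_ID.load() != 0) return false;  // a write is in progress: wait
    read_ref.fetch_add(1);                    // tentative read lock
    if (writer_ID.load() != 0) {              // re-check: did a writer slip in?
        read_ref.fetch_sub(1);                // undo and let the caller retry
        return false;
    }
    return true;                              // safe to read the data
}

// Writer: tentatively take the write lock, then re-check for readers.
bool try_write_lock_checked(long thread_id) {
    if (read_ref.load() != 0) return false;   // readers are active: wait
    long expected = 0;
    if (!writer_ID.compare_exchange_strong(expected, thread_id))
        return false;                         // another writer holds the lock
    if (read_ref.load() != 0) {               // re-check: did a reader slip in?
        writer_ID.store(0);                   // undo and let the caller retry
        return false;
    }
    return true;                              // safe to write the data
}
```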
A thread's changes to the values of read_ref and writer_ID are atomic operations. Atomic operations are typically instructions provided by the CPU that guarantee atomicity: while one thread executes an atomic operation, it cannot be interrupted by other threads and the CPU does not switch to another thread. In other words, once such an atomic operation starts, it runs until the operation ends.
In the process of implementing the present application, the inventor finds that at least the following problems exist in the prior art:
in a multi-computing-core computer system, threads of different cores may perform read and write operations on the same data. In particular, it is common for a piece of data to receive a large number of read operations, and no write operation, over a period of time. Each core typically has its own cache, and each core maintains a copy of the read_ref value in its corresponding cache. According to the prior art implementation, the read_ref values in the caches of all cores must be kept consistent. Thus, in a multi-computing-core computer system, once the read_ref value in the cache of one core changes, that core communicates with the other cores to notify them of the change, and after receiving the notification the other cores update the read_ref value in their own caches.
Therefore, in the prior art, when multiple threads of different cores read the same data, the communication between cores takes time, so the atomic operation that changes the read_ref value in each core's cache takes a considerable amount of time, and execution efficiency is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide a read lock operation method, a write lock operation method and a system, so as to improve execution efficiency.
To solve the foregoing technical problem, an embodiment of the present application provides a read lock operation method, a write lock operation method, and a system, which are implemented as follows:
a method of read lock operation comprising:
setting a private reference count corresponding to each core;
and, while threads of different cores read the same data, performing read-lock acquisition and read-lock release operations on the private reference counts corresponding to the different cores.
A read lock operating system comprises a data unit, a first cache unit, a second cache unit, a first computing core and a second computing core, wherein,
a data unit for storing data;
a first cache unit to store an assigned first private reference count for a first compute core;
a second cache unit to store a second private reference count assigned for a second compute core;
the first computing core and the second computing core are used for reading the same data in the data unit; and,
in the process of reading the data by a thread of the first computing core, performing read-lock acquisition and read-lock release operations on the private reference count corresponding to the first core;
and in the process of reading the data by a thread of the second computing core, performing read-lock acquisition and read-lock release operations on the private reference count corresponding to the second core.
A write lock operation method, comprising:
before a write operation is performed on the data, judging whether any of the computing cores is in the process of a read operation on the data;
before the write operation is performed on the data, judging whether the data is in the process of another write operation;
and, if both judgment results are negative, performing write-lock acquisition and write-lock release operations with the global write lock while the write operation is performed on the data.
A write lock operation system comprises a data unit, a first judgment unit, a second judgment unit, and a write-lock acquisition and release unit, wherein,
a data unit for storing data;
the first judgment unit is used for judging, before a write operation is performed on the data, whether any of the computing cores is in the process of a read operation on the data;
the second judgment unit is used for judging, before the write operation is performed on the data, whether the data is in the process of another write operation;
and the write-lock acquisition and release unit is used for performing write-lock acquisition and write-lock release operations with the global write lock, while the write operation is performed on the data, when the judgment results of the first judgment unit and the second judgment unit are both negative.
According to the technical solutions provided by the embodiments of the application, when threads of different cores read the same data, each core independently operates on its own private reference count. The private reference counts of different cores do not need to be synchronized between cores, so execution efficiency is improved. The scalability of the read lock is also improved: no matter how many cores' threads acquire and release the read lock simultaneously, the time needed for acquiring and releasing the read lock hardly increases, which further improves execution efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in the present application, and those skilled in the art can obtain other drawings from these drawings without any creative effort.
FIG. 1 is a flow chart of one embodiment of a method for read lock operation of the present application;
FIG. 2 is a block diagram of one embodiment of a read lock operating system of the present application;
FIG. 3 is a flow chart of one embodiment of a write lock operation method of the present application;
FIG. 4 is a block diagram of one embodiment of a write lock operating system of the present application.
Detailed Description
The embodiment of the application provides a read lock operation method, a write lock operation method and a system.
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without any creative effort shall fall within the protection scope of the present application.
An embodiment of a method of operating a read lock of the present application is first described.
FIG. 1 is a flow chart illustrating one embodiment of a method for read lock operation of the present application. As shown in fig. 1, the method of this embodiment includes:
s100: the private reference count corresponding to each core is set.
Modern CPUs employ a number of techniques to counteract the latency of memory accesses. In the time it takes to read or write one piece of memory data, a CPU can execute hundreds of instructions. A multi-level Static Random Access Memory (SRAM) cache (hereinafter simply called a cache) is the main means of reducing the impact of this latency.
For example, in a dual-core computer system, core1 and core2 have corresponding cache 1 and cache 2, respectively. The cache may be the cache of a computing core. A CPU, for instance, usually has a first-level cache and a second-level cache, and some CPUs even have a third-level cache. For a CPU with a first-level and a second-level cache, the data the CPU is about to operate on is read from memory into the second-level cache, then from the second-level cache into the first-level cache, and then from the first-level cache into the CPU for execution. Generally, the closer a memory is to the CPU, the faster it is but the more expensive; the farther a memory is from the CPU, the slower it is but the cheaper. Data that the CPU reads and writes frequently is therefore generally kept in memory close to the CPU, which improves the utilization of the memory with the highest manufacturing cost.
In this step, the private reference count (private_read_ref) may preferably be placed in the cache. For example, the private reference count may be set in the CPU's first-level cache. Of course, depending on the architecture of the CPU and the capacity of the memories at different levels, the private reference count may also be set in the second-level cache, or in another memory whose read speed is of the same order as the CPU's atomic operation speed; the embodiments of the present application do not specifically limit this. In fact, caches are usually transparent to the program: the program has no way to control whether a variable is placed in the cache, or in which level of the cache it is placed. When a program needs to operate on a variable, the CPU checks whether the variable is in the first-level cache; if so, the CPU reads it directly from the first-level cache. If not, the CPU checks whether it is in the second-level cache: if so, the variable is loaded from the second-level cache into the first-level cache, and if it is not in the second-level cache either, the variable is loaded from memory into the second-level cache and the first-level cache.
In the prior art, read operations by different threads on the same data involve the same reference count, i.e., they operate on a single reference count. Following common usage in the computer field, this reference count read_ref can be called a global read_ref. Specifically, different threads of the same computing core, whether they belong to the same process or to different processes, perform increment (++) or decrement (--) operations on the same global read_ref when reading the same data. In a multi-core computer system, if only one global reference count is still used for multiple cores, the problems analyzed in the background section arise.
In this step, a private reference count is set for each core. For example, for core1, a corresponding private reference count is set, e.g., read_ref_core1; for core2, a corresponding private reference count is also set, e.g., read_ref_core2. If other cores are present, the same applies to each of them.
The private reference count corresponding to each core need not be permanently (or fixedly) assigned; it may be assigned temporarily. For example, it may be allocated before a thread of the core first acquires a read lock on the data, and reclaimed after the thread of the core finishes its read operation on the data. Specifically, an array of private reference counts, [read_ref], may be set up. Before a thread of a core first acquires a read lock on the data, it applies for one entry of the [read_ref] array. The [read_ref] array can be made large enough, each entry in the array can be of integer (int) type, and the initial value of each entry can be initialized to 0. Of course, for read operations on a given piece of data, each entry of the [read_ref] array may instead be fixedly assigned to one core.
Preferably, in actual operation, each entry of the [read_ref] array may be allocated its own cache line in the cache. The cache line is the smallest unit in which a multi-core CPU maintains cache consistency, and it is also the actual unit of memory exchange. In practice, a cache line is larger than 8 bytes on most platforms, and on most platforms it is 64 bytes. If each entry of the [read_ref] array is defined as an integer type occupying 8 bytes, one cache line can hold 8 read_ref entries. If more than one read_ref is stored in a single cache line, conflicts arise when different elements of the array are operated on. To avoid such conflicts, each read_ref in the [read_ref] array may be stored in its own cache line. For example, each entry of the [read_ref] array may be declared as a structure whose size is 64 bytes. In this way, each entry of the [read_ref] array exclusively occupies one cache line, which avoids conflicts during operation.
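One way to give each entry of the [read_ref] array its own cache line, as described above, is to pad each entry to 64 bytes (a sketch only; the per-core array replaces the single global read_ref of the background sketches, and the array size kMaxCores and the use of alignas are assumptions):

```cpp
#include <atomic>

constexpr int kCacheLineSize = 64;   // typical cache-line size mentioned above
constexpr int kMaxCores      = 64;   // assumed upper bound on the number of cores

// Each private reference count occupies a full cache line, so updates to one
// core's counter never invalidate the cache line holding another core's counter.
struct alignas(kCacheLineSize) PrivateReadRef {
    std::atomic<int> count{0};
    char padding[kCacheLineSize - sizeof(std::atomic<int>)];
};
static_assert(sizeof(PrivateReadRef) == kCacheLineSize,
              "each entry must own exactly one cache line");

// The [read_ref] array: one entry per core, each entry on its own cache line.
PrivateReadRef read_ref[kMaxCores];
```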
S110: and in the process of reading the same data by the threads of different cores, performing reading lock adding and reading lock reading operations by using the private reference counts corresponding to the different cores.
For example, suppose a computer system includes two computing cores, core1 and core2, and that core1 and core2 both read the same data. According to S100, core1 may apply for one private reference count, denoted read_ref_core1; similarly, core2 may apply for one private reference count, denoted read_ref_core2.
In this way, while a thread of core1 reads the data, the read lock is first acquired: the private reference count read_ref_core1 of core1 is incremented by 1, so read_ref_core1 changes from its initial value 0 to 1. The thread of core1 then reads the data. After the read operation is completed, the read lock is released: read_ref_core1 is decremented by 1, so it changes from 1 back to 0.
Similarly, while a thread of core2 reads the data, the read lock is first acquired: the private reference count read_ref_core2 of core2 is incremented by 1, so read_ref_core2 changes from its initial value 0 to 1. The thread of core2 then reads the data. After the read operation is completed, the read lock is released: read_ref_core2 is decremented by 1, so it changes from 1 back to 0.
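The core1/core2 walkthrough above corresponds roughly to the following sketch, built on the padded array from the previous sketch (how the current core index is obtained — here the Linux-specific sched_getcpu() — is an assumption):

```cpp
#include <sched.h>   // sched_getcpu(): Linux/glibc specific (an assumption)

// Acquire the read lock on the calling thread's current core: only that core's
// private reference count is incremented; the other cores' counters are untouched.
int acquire_read_lock() {
    int core = sched_getcpu();
    read_ref[core].count.fetch_add(1);   // e.g. read_ref_core1: 0 -> 1
    return core;                         // remember which counter was used
}

// Release the read lock on the same counter that was incremented.
void release_read_lock(int core) {
    read_ref[core].count.fetch_sub(1);   // e.g. read_ref_core1: 1 -> 0
}
```

Returning the core index from acquire_read_lock and passing it back to release_read_lock ensures the same counter is decremented even if the thread migrates to another core in between, a detail the text does not address.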
With this approach, when threads of different cores read the same data, each core operates independently on its own private reference count. The private reference counts of different cores do not need to be synchronized between cores, so execution efficiency is improved. Moreover, the scalability of the read lock is improved: no matter how many cores' threads acquire and release the read lock simultaneously, the time needed for acquiring and releasing the read lock hardly increases.
In addition, because the private reference counts of different cores do not need to be synchronized between cores, the inter-core communication process is eliminated, which saves the bandwidth, time and other overhead that inter-core communication would require.
S110 may specifically include the following steps:
S111: in the process of reading the data by a thread of the first core, performing read-lock acquisition and read-lock release operations with the private reference count corresponding to the first core.
S112: in the process of reading the data by a thread of the second core, performing read-lock acquisition and read-lock release operations with the private reference count corresponding to the second core.
Acquiring the read lock with the private reference count specifically includes: the threads of the different cores each perform an add-1 operation on the private reference count corresponding to their own core. Releasing the read lock with the private reference count specifically includes: the threads of the different cores each perform a subtract-1 operation on the private reference count corresponding to their own core. Between the read-lock acquisition and release operations, the threads of each core may read the data.
It should be noted that when multiple different threads of the same core perform read operations on the same data, the read-lock acquisition and release operations may be performed on the same private reference count. For example, while thread 1 of core1 reads the data, it first acquires the read lock: the private reference count read_ref_core1 of core1 is incremented by 1, changing from its initial value 0 to 1, and thread 1 of core1 then reads the data. While thread 1 of core1 is reading the data, thread 2 of core1 also performs a read operation on the same data, so it also adds 1 to the value of read_ref_core1 and reads the data; the value of read_ref_core1 at this point is 2. After the read operation of thread 1 of core1 is completed, read_ref_core1 is decremented by 1 to release its read lock, and the value of read_ref_core1 becomes 1. Later, thread 2 of core1 finishes its read operation on the data, subtracts 1 from the value of read_ref_core1, and releases its read lock; the value of read_ref_core1 becomes 0. Thus, for the same core, no matter how many threads acquire and release the read lock simultaneously, the time needed for acquiring and releasing the read lock hardly increases.
It should also be noted that, to avoid data inconsistency, the read lock in the embodiments of the present application is still mutually exclusive with the write lock. For example, in a computer system with multiple cores, a global write lock is set, e.g., global_writer_id. If a thread is to write data, it acquires the write lock on that data before performing the write. For example, thread 1 of some core updates the value of global_writer_id to the ID of thread 1 (e.g., global_writer_id is an integer whose initial value is 0, and no thread has the ID 0) and then writes the data. During the write, a thread of some core (which may be the same core as the thread holding the write lock, or a different one), referred to here as thread 2, performs a read operation on the same data and applies for the private reference count corresponding to its core; the private reference count is initialized to 0, for example. However, because the value of global_writer_id is not 0 at this point, thread 2 cannot acquire a read lock and cannot read the data. After the write operation of thread 1 is completed, it releases the lock, i.e., updates the value of global_writer_id to 0. After its earlier failure to acquire the read lock and a period of waiting, thread 2 finds that the value of global_writer_id is now 0 and can acquire the read lock. Thread 2 may also retry the read lock at regular intervals after the failed attempt; once the value of global_writer_id is 0, the retry succeeds. Thread 2 then adds 1 to the value of the private reference count it applied for, which becomes 1, and reads the data. After the read operation of thread 2 is completed, it releases the read lock, i.e., subtracts 1 from the corresponding private reference count, which returns to 0.
Based on this, before the threads of the different cores in S110 acquire the read lock on their corresponding private reference counts, the method may further include:
S101: the threads of the different cores check whether the data is in the process of a write operation, and if the check result is no, execution of S110 is triggered.
Whether a write operation is in progress can be determined by checking the state of the global write lock. For example, it may be checked whether the global write lock is 0, and S110 is executed when the check result is 0.
Conversely, if the value of the global write lock is found to be not 0, a write operation on the data is currently in progress. Because of the mutual exclusivity of the write lock and the read lock, a read lock cannot be acquired on the data and the data cannot be read. In this case, S110 is executed only after waiting for the global write lock to return to 0.
S101 may be executed after S100 or before S100.
It should be noted that, for read-lock acquisition, in order to further guard against another thread acquiring the write lock for a write operation between the check of the global_writer_id value and the corresponding read operation, i.e., to avoid the conflict detection failing in that case, the thread checks again whether the global_writer_id value is 0 after adding 1 to the private reference count corresponding to its core, and performs the read operation only if it is still 0.
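Combining the wait on the global write lock with the re-check just described, the read side of the scheme could look roughly like the following (continuing the earlier sketches; global_writer_id, the retry loop and the yield between retries are assumptions where the text only says the thread waits and retries):

```cpp
#include <thread>    // std::this_thread::yield()

std::atomic<long> global_writer_id{0};   // global write lock, 0 = unlocked

// Read-lock acquisition on the calling thread's core, combining the wait on the
// global write lock (S101) with the re-check after incrementing described above.
int acquire_read_lock_checked() {
    for (;;) {
        while (global_writer_id.load() != 0)   // a write is in progress: wait
            std::this_thread::yield();
        int core = sched_getcpu();
        read_ref[core].count.fetch_add(1);     // tentative per-core read lock
        if (global_writer_id.load() == 0)      // re-check for a racing writer
            return core;                       // success: safe to read the data
        read_ref[core].count.fetch_sub(1);     // conflict detected: undo and retry
    }
}
```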
One embodiment of the read lock operating system of the present application is described below. Fig. 2 shows a block diagram of an embodiment of the system.
As shown in fig. 2, the read lock operating system in an embodiment of the present application includes a first computing core 11a, a second computing core 11b, a first cache unit 12a, a second cache unit 12b, and a data unit 13, where each of the computing cores corresponds to a unique cache unit.
Wherein:
a data unit 13 for storing data;
a first cache unit 12a, configured to store a first private reference count allocated for the first computing core;
a second cache unit 12b for saving the allocated second private reference count for the second computational core;
a first computing core 11a and a second computing core 11b for reading the same data in the data unit; and,
in the process of reading the data by a thread of the first computing core 11a, performing read-lock acquisition and read-lock release operations on the private reference count corresponding to the first core;
and in the process of reading the data by a thread of the second computing core 11b, performing read-lock acquisition and read-lock release operations on the private reference count corresponding to the second core.
Wherein:
the first cache unit 12a may be a cache of a first computing core;
the second cache unit 12b may be a cache of a second computing core.
As in the foregoing method embodiment, a private reference count corresponding to each core is set. The private reference count corresponding to each core may be allocated before a thread of that core first acquires a read lock on the data, or the private reference count of each core may be fixedly assigned. For example, an array of private reference counts, [read_ref], may be set up. Before a thread of a core first acquires a read lock on the data, it applies for one entry of the [read_ref] array. The [read_ref] array can be made large enough, each entry in the array can be of integer (int) type, and the initial value of each entry can be initialized to 0. Of course, for read operations on a given piece of data, each entry of the [read_ref] array may instead be fixedly assigned to one core. Preferably, in actual operation, each entry of the [read_ref] array may be allocated its own cache line (cacheline) in the cache. The cache line is the smallest unit in which a multi-core CPU maintains cache consistency, and it is also the actual unit of memory exchange. In practice, a cache line is larger than 8 bytes on most platforms, and on most platforms it is 64 bytes. If each entry of the [read_ref] array is defined as an integer type occupying 8 bytes, one cache line can hold 8 read_ref entries. If more than one read_ref is stored in a single cache line, conflicts arise when different elements of the array are operated on. To avoid such conflicts, each read_ref in the [read_ref] array may be stored in its own cache line. For example, each entry of the [read_ref] array may be declared as a structure whose size is 64 bytes. In this way, each entry of the [read_ref] array exclusively occupies one cache line, which avoids conflicts during operation.
In combination with the above, in an embodiment of the read lock operating system of the present application, caches of different cores may correspond to different cache lines. For example, the first cache unit corresponds to a first cache line, and the second cache unit corresponds to a second cache line.
In the embodiment of the read lock operating system, the system may further include a checking unit 14, configured to check whether the data is in the process of a write operation and, if not, to trigger each computing core to perform read-lock acquisition and read-lock release operations on its corresponding private reference count.
Acquiring the read lock on the private reference count includes: the thread of each core performs an add-1 operation on the private reference count corresponding to that core. Releasing the read lock on the private reference count specifically includes: the thread of each core performs a subtract-1 operation on the private reference count corresponding to that core. Between the read-lock acquisition and release operations, the threads of each core may read the data.
One embodiment of a write lock operation method of the present application is described below. Fig. 3 shows a flow chart of an embodiment of the method. As shown in fig. 3, an embodiment of a write lock operation method of the present application includes:
S300: before a write operation is performed on the data, judging whether any of the computing cores is in the process of a read operation on the data.
Determining whether any of the computing cores is in the process of a read operation on the data may specifically be implemented by traversing the private reference counts of the computing cores that correspond to the data and checking whether each of them is 0. If any private reference count is not 0, the data is in the process of a read operation; if all of them are 0, the data is not in the process of a read operation.
S310: before the write operation is performed on the data, judging whether the data is in the process of another write operation.
S310 may specifically be implemented by judging whether the global write lock for the data is 0. If it is 0, the data is not in the process of another write operation; if it is not 0, the data is in the process of another write operation.
S320: if the judgment results of S300 and S310 are both negative, performing write-lock acquisition and write-lock release operations with the global write lock while the write operation is performed on the data.
Specifically, before the write operation is performed, the write lock is acquired on the data; after the write operation, the write lock is released on the data.
The global write lock in S320 is, for example, global_writer_id. Acquiring the write lock updates the value of global_writer_id to the ID of the writing thread; releasing the write lock updates the value of global_writer_id to 0.
Similarly, for write-lock acquisition, in order to further guard against another thread acquiring a read lock for a read operation between the check of the per-core private reference count values and the corresponding write operation, i.e., to avoid the conflict detection failing in that case, the thread checks again whether every core's private reference count is 0 after the value of global_writer_id has been updated to the ID of the writing thread, and performs the write operation only if they are all still 0.
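Combining S300 through S320 with the re-check just described, the write side could look roughly like this (continuing the earlier sketches; the text does not say whether a thread that fails the re-check releases the write lock and retries, as done here, or waits while holding it):

```cpp
// True only if no core currently holds a read lock on the data
// (the traversal of all private reference counts in S300).
bool no_readers() {
    for (int core = 0; core < kMaxCores; ++core)
        if (read_ref[core].count.load() != 0) return false;
    return true;
}

// Write-lock acquisition: mutually exclusive with all readers and other writers.
void acquire_write_lock(long thread_id) {
    for (;;) {
        if (!no_readers()) {                   // S300: some core is still reading
            std::this_thread::yield();
            continue;
        }
        long expected = 0;                     // S310: proceed only if no other writer
        if (!global_writer_id.compare_exchange_strong(expected, thread_id)) {
            std::this_thread::yield();
            continue;
        }
        if (no_readers()) return;              // re-check the private counts (above)
        global_writer_id.store(0);             // a reader slipped in: undo and retry
        std::this_thread::yield();
    }
}

// Write-lock release (the unlock step of S320).
void release_write_lock() { global_writer_id.store(0); }
```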
The write lock operation method may be based on the read lock operation method or the read lock operation system.
One embodiment of the write lock operating system of the present application is described below. Fig. 4 shows a block diagram of an embodiment of the system. As shown in FIG. 4, the embodiment of the write lock operating system of the present application includes:
a data unit 3 for storing data;
a first judgment unit 21a, configured to judge, before a write operation is performed on the data, whether any of the computing cores is in the process of a read operation on the data;
a second judgment unit 21b, configured to judge, before the write operation is performed on the data, whether the data is in the process of another write operation;
and a write-lock acquisition and release unit 22, configured to perform write-lock acquisition and write-lock release operations with the global write lock, while the write operation is performed on the data, when the judgment results of the first judgment unit and the second judgment unit are both negative.
Specifically, before a write operation is performed, a write lock is applied to the data; after the write operation, the lock is unlocked for the data.
The global write lock is, for example, global_writer_id. Acquiring the write lock updates the value of global_writer_id to the ID of the writing thread; releasing the write lock updates the value of global_writer_id to 0.
The write lock operating system may be based on the read lock operating method or the read lock operating system.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the present application may, in essence or in part, be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in certain parts of the embodiments, of the present application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
While the present application has been described by way of embodiments, those of ordinary skill in the art will appreciate that there are numerous variations and permutations of the present application that do not depart from the spirit of the application, and it is intended that the appended claims encompass such variations and permutations.

Claims (16)

1. A method for operating a read lock, applied to a computer system with multiple computing cores, the method comprising:
setting, for specified data, a private reference count corresponding to each core;
in the process of reading the specified data by threads of different cores, performing read-lock acquisition and read-lock release operations with the private reference counts corresponding to the different cores;
wherein, when a thread of a first core performs the read-lock acquisition operation on the specified data and a thread of a second core does not perform a read-lock acquisition operation on the specified data, the value of the private reference count corresponding to the first core is increased by 1 and the value of the private reference count corresponding to the second core is not increased; and when the thread of the first core performs the read-lock release operation on the specified data and the thread of the second core does not perform a read-lock release operation on the specified data, the value of the private reference count corresponding to the first core is decreased by 1 and the value of the private reference count corresponding to the second core is not decreased.
2. The method of read lock operations as claimed in claim 1, wherein the setting of the private reference count for each core comprises:
allocating the private reference count corresponding to each core before a thread of that core first acquires a read lock on the specified data, and reclaiming the private reference count after the thread of that core finishes executing the read operation on the specified data; or,
the private reference count for each core is fixedly assigned.
3. The method of read lock operation as claimed in claim 1 or 2 wherein said setting a private reference count for each core comprises:
an array of private reference counts is set, and each entry in the array is assigned to a core.
4. The method of read lock operations as claimed in claim 3 wherein the setting of the private reference count for each core comprises:
each entry in the reference count array is allocated to a cache line in the cache.
5. The read lock operation method according to claim 1, wherein, in the process of reading the same data by the threads of different cores, performing read-lock acquisition and read-lock release operations with the private reference counts corresponding to the different cores comprises:
in the process of reading the specified data by a thread of the first core, performing read-lock acquisition and read-lock release operations with the private reference count corresponding to the first core;
and in the process of reading the specified data by a thread of the second core, performing read-lock acquisition and read-lock release operations with the private reference count corresponding to the second core.
6. The read lock operation method of claim 1, wherein,
the operation of acquiring the read lock with the private reference counts corresponding to the different cores specifically comprises: the threads of the different cores each perform an add-1 operation on the private reference count corresponding to their own core;
the operation of releasing the read lock with the private reference counts corresponding to the different cores specifically comprises: the threads of the different cores each perform a subtract-1 operation on the private reference count corresponding to their own core.
7. The read lock operation method according to claim 1, wherein, before the read lock is acquired with the private reference counts corresponding to the different cores, the method further comprises:
the threads of the different cores check whether the specified data is in the process of a write operation, and the check result is no.
8. The read lock operation method as claimed in claim 6, wherein, after the threads of the different cores perform the add-1 operation on the private reference counts corresponding to their own cores and before the read operation, the method further comprises:
the threads of the different cores check whether the specified data is in the process of a write operation, and the check result is no.
9. A read lock operating system comprises a data unit, a first cache unit, a second cache unit, a first computational core and a second computational core, wherein,
a data unit for storing data;
a first cache unit to store an assigned first private reference count for a first compute core;
a second cache unit to store a second private reference count assigned for a second compute core;
the first computing core and the second computing core are used for reading the specified data in the data unit; and,
in the process of reading the specified data by a thread of the first computing core, performing read-lock acquisition and read-lock release operations with the private reference count corresponding to the first core;
in the process of reading the specified data by a thread of the second computing core, performing read-lock acquisition and read-lock release operations with the private reference count corresponding to the second core;
when a thread of the first core performs the read-lock acquisition operation on the specified data and a thread of the second core does not perform a read-lock acquisition operation on the specified data, the value of the private reference count corresponding to the first core is increased by 1 and the value of the private reference count corresponding to the second core is not increased; when the thread of the first core performs the read-lock release operation on the specified data and the thread of the second core does not perform a read-lock release operation on the specified data, the value of the private reference count corresponding to the first core is decreased by 1 and the value of the private reference count corresponding to the second core is not decreased.
10. The read lock operating system of claim 9, wherein:
the first cache unit is a cache of a first compute core;
the second cache unit is a cache of a second compute core.
11. The read lock operating system of claim 10, wherein:
the first cache unit corresponds to a first cache line;
the second cache unit corresponds to a second cache line.
12. The read lock operating system of claim 9, further comprising a checking unit configured to check whether the data is in the process of a write operation and, if not, to trigger each computing core to perform read-lock acquisition and read-lock release operations on its corresponding private reference count.
13. A write lock operation method applied to a computer system with multiple computing cores, wherein for specified data, a private reference count corresponding to each computing core is set so as to implement the read lock operation method for the specified data according to any one of claims 1 to 8, and the write lock operation method comprises the following steps:
before the write operation is executed on the specified data, whether a read operation process for the data exists in all the computing cores is judged by traversing the private reference counts corresponding to the computing cores;
before the specified data is subjected to write operation, judging whether the data is in another write operation process by judging a global write lock aiming at the specified data;
and if both judgment results are negative, performing write-lock acquisition and write-lock release operations with the global write lock while the write operation is performed on the data.
14. The write lock operation method of claim 13,
the operation of acquiring the write lock with the global write lock specifically comprises: updating the value of the global write lock variable to the ID of the writing thread;
the operation of releasing the write lock with the global write lock specifically comprises: updating the value of the global write lock variable to 0.
15. The write lock operation method of claim 14, wherein after updating the value of the global write lock variable to the ID of the write thread and before performing the write operation, further comprising:
the core private reference count values at this time are checked, and the check result is 0.
16. A write lock operation system, comprising a data unit, a first judgment unit, a second judgment unit and a write-lock acquisition and release unit, wherein, for specified data, a private reference count corresponding to each computing core is set so as to implement the read lock operation method for the specified data according to any one of claims 1 to 8, and wherein,
a data unit for storing data;
the first judgment unit is used for judging, before a write operation is performed on the specified data stored in the data unit, whether any of the computing cores is in the process of a read operation on the data, by traversing the private reference counts corresponding to the computing cores;
the second judgment unit is used for judging, before the write operation is performed on the specified data, whether the data is in the process of another write operation, by checking a global write lock for the specified data;
and the write-lock acquisition and release unit is used for performing write-lock acquisition and write-lock release operations with the global write lock, while the write operation is performed on the data, when the judgment results of the first judgment unit and the second judgment unit are both negative.
CN201310482117.3A 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system Active CN104572568B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111082328.9A CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system
CN201310482117.3A CN104572568B (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310482117.3A CN104572568B (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111082328.9A Division CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Publications (2)

Publication Number Publication Date
CN104572568A CN104572568A (en) 2015-04-29
CN104572568B true CN104572568B (en) 2021-07-23

Family

ID=53088677

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111082328.9A Pending CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system
CN201310482117.3A Active CN104572568B (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111082328.9A Pending CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Country Status (1)

Country Link
CN (2) CN113835901A (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094840B (en) * 2015-08-14 2019-01-29 浪潮(北京)电子信息产业有限公司 A kind of atomic operation implementation method and device based on buffer consistency principle
CN105955804B (en) 2016-04-22 2018-06-05 星环信息科技(上海)有限公司 A kind of method and apparatus for handling distributed transaction
US20180232304A1 (en) * 2017-02-16 2018-08-16 Futurewei Technologies, Inc. System and method to reduce overhead of reference counting
US20180260255A1 (en) * 2017-03-10 2018-09-13 Futurewei Technologies, Inc. Lock-free reference counting
CN108388424B (en) * 2018-03-09 2021-09-21 北京奇艺世纪科技有限公司 Method and device for calling interface data and electronic equipment
CN110704198B (en) * 2018-07-10 2023-05-02 阿里巴巴集团控股有限公司 Data operation method, device, storage medium and processor
CN109271258B (en) * 2018-08-28 2020-11-17 百度在线网络技术(北京)有限公司 Method, device, terminal and storage medium for realizing re-entry of read-write lock
CN109656730B (en) * 2018-12-20 2021-02-23 东软集团股份有限公司 Cache access method and device
CN111459691A (en) * 2020-04-13 2020-07-28 中国人民银行清算总中心 Read-write method and device for shared memory
CN111597193B (en) * 2020-04-28 2023-09-26 广东亿迅科技有限公司 Tree data locking and unlocking method
CN111782609B (en) * 2020-05-22 2023-10-13 北京和瑞精湛医学检验实验室有限公司 Method for rapidly and uniformly slicing fastq file
CN111913810B (en) * 2020-07-28 2024-03-19 阿波罗智能技术(北京)有限公司 Task execution method, device, equipment and storage medium in multithreading scene
CN112346879B (en) * 2020-11-06 2023-08-11 网易(杭州)网络有限公司 Process management method, device, computer equipment and storage medium
CN113791916B (en) * 2021-11-17 2022-02-08 支付宝(杭州)信息技术有限公司 Object updating and reading method and device
CN115202884B (en) * 2022-07-26 2023-08-22 江苏安超云软件有限公司 Method for adding read write lock of high-performance system based on polling and application
CN115599575B (en) * 2022-09-09 2024-04-16 中电信数智科技有限公司 Novel method for solving concurrent activation and deactivation of cluster logical volumes

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6292881B1 (en) * 1998-03-12 2001-09-18 Fujitsu Limited Microprocessor, operation process execution method and recording medium
CN101854302A (en) * 2010-05-27 2010-10-06 中兴通讯股份有限公司 Message order-preserving method and system
CN102681892A (en) * 2012-05-15 2012-09-19 西安热工研究院有限公司 Key-Value type write-once read-many lock pool software module and running method thereof
CN102999378A (en) * 2012-12-03 2013-03-27 中国科学院软件研究所 Read-write lock implement method
CN103279428A (en) * 2013-05-08 2013-09-04 中国人民解放军国防科学技术大学 Explicit multi-core Cache consistency active management method facing flow application

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100583832C (en) * 2007-03-30 2010-01-20 华为技术有限公司 Data management method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6292881B1 (en) * 1998-03-12 2001-09-18 Fujitsu Limited Microprocessor, operation process execution method and recording medium
CN101854302A (en) * 2010-05-27 2010-10-06 中兴通讯股份有限公司 Message order-preserving method and system
CN102681892A (en) * 2012-05-15 2012-09-19 西安热工研究院有限公司 Key-Value type write-once read-many lock pool software module and running method thereof
CN102999378A (en) * 2012-12-03 2013-03-27 中国科学院软件研究所 Read-write lock implement method
CN103279428A (en) * 2013-05-08 2013-09-04 中国人民解放军国防科学技术大学 Explicit multi-core Cache consistency active management method facing flow application

Also Published As

Publication number Publication date
CN113835901A (en) 2021-12-24
CN104572568A (en) 2015-04-29

Similar Documents

Publication Publication Date Title
CN104572568B (en) Read lock operation method, write lock operation method and system
US8954986B2 (en) Systems and methods for data-parallel processing
US8881153B2 (en) Speculative thread execution with hardware transactional memory
US8364909B2 (en) Determining a conflict in accessing shared resources using a reduced number of cycles
RU2501071C2 (en) Late lock acquire mechanism for hardware lock elision (hle)
US7899997B2 (en) Systems and methods for implementing key-based transactional memory conflict detection
US8645963B2 (en) Clustering threads based on contention patterns
US20160154677A1 (en) Work Stealing in Heterogeneous Computing Systems
US10579413B2 (en) Efficient task scheduling using a locking mechanism
US10282230B2 (en) Fair high-throughput locking for expedited grace periods
US11748174B2 (en) Method for arbitration and access to hardware request ring structures in a concurrent environment
US9176872B2 (en) Wait-free algorithm for inter-core, inter-process, or inter-task communication
US11170816B2 (en) Reader bias based locking technique enabling high read concurrency for read-mostly workloads
CN112306699B (en) Method and device for accessing critical resources, computer equipment and readable storage medium
US20180260255A1 (en) Lock-free reference counting
US8468169B2 (en) Hierarchical software locking
US10101999B2 (en) Memory address collision detection of ordered parallel threads with bloom filters
US10310916B2 (en) Scalable spinlocks for non-uniform memory access
CN112346879B (en) Process management method, device, computer equipment and storage medium
US11074200B2 (en) Use-after-free exploit prevention architecture
US11809319B2 (en) Contention tracking for processor cache management
KR101667426B1 (en) Lock-free memory controller and multiprocessor system using the lock-free memory controller
Shin et al. Strata: Wait-free synchronization with efficient memory reclamation by using chronological memory allocation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20191213

Address after: P.O. Box 31119, grand exhibition hall, hibiscus street, 802 West Bay Road, Grand Cayman, Cayman Islands

Applicant after: Innovative advanced technology Co., Ltd

Address before: Greater Cayman, British Cayman Islands

Applicant before: Alibaba Group Holding Co., Ltd.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210208

Address after: 801-10, Section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province 310000

Applicant after: Ant financial (Hangzhou) Network Technology Co.,Ltd.

Address before: Ky1-1205 P.O. Box 31119, hibiscus street, 802 Sai Wan Road, Grand Cayman Islands, ky1-1205

Applicant before: Innovative advanced technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210908

Address after: 100020 unit 02, 901, floor 9, unit 1, building 1, No.1, East Third Ring Middle Road, Chaoyang District, Beijing

Patentee after: Beijing Aoxing Beisi Technology Co., Ltd

Address before: 801-10, Section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province 310000

Patentee before: Ant financial (Hangzhou) Network Technology Co.,Ltd.