WO2021082665A1 - Data processing method, apparatus, device, and medium - Google Patents
- Publication number
- WO2021082665A1 (PCT/CN2020/110753)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- thread
- data
- lock
- shared cache
- processor
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
Definitions
- This application relates to the field of computer technology, and in particular to a data processing method, apparatus, device, and medium.
- In existing systems, memory access by threads is controlled through a lock mechanism such as the MCS lock. When a thread acquires the lock, it must exchange data with the lock; for example, it must exchange the pending value.
- Under the existing lock mechanism, when a thread exchanges data with a lock, it must first determine which level of cache holds the lock data to be exchanged, such as L1 (first-level cache) or L2 (second-level cache), and only then can it go to the corresponding cache to perform the exchange.
- When the previous thread wants to unlock, it must wait for its own "pointer" data (the info.next field) to be filled in by the next thread before it can change the next thread's pending value; only then can the previous thread unlock, that is, only then can the next thread obtain the lock. The two adjacent threads are thus bound together and affect each other: an error in one thread affects the locking or unlocking of the other.
- The existing lock mechanism therefore involves a complicated process and operates inefficiently; the critical section executes slowly and contention is higher (that is, each thread takes longer).
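For reference, the info.next / info.pending hand-off criticized above is the classical MCS queue lock. The sketch below is illustrative, not taken from the patent: the names (`mcs_node`, `mcs_lock`, `mcs_unlock`) are assumptions, and a boolean `locked` flag stands in for the pending value. It shows the coupling the patent objects to: each waiter must publish its node address to its predecessor via `next`, and the unlocking thread must touch its successor's data.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Classical MCS queue lock: each waiter links itself behind its
 * predecessor (the "info.next" pointer) and spins on its own flag
 * (the "pending" role), which the predecessor flips on unlock. */
struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool locked;              /* true while this thread must wait */
};

typedef _Atomic(struct mcs_node *) mcs_lock_t;

void mcs_lock(mcs_lock_t *lock, struct mcs_node *me) {
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);
    /* Atomically append ourselves to the tail of the queue. */
    struct mcs_node *prev = atomic_exchange(lock, me);
    if (prev != NULL) {
        /* Predecessor exists: publish our address so it can signal us, */
        atomic_store(&prev->next, me);
        /* then spin on our own flag until the predecessor clears it.  */
        while (atomic_load(&me->locked))
            ;
    }
}

void mcs_unlock(mcs_lock_t *lock, struct mcs_node *me) {
    struct mcs_node *succ = atomic_load(&me->next);
    if (succ == NULL) {
        /* No known successor: try to swing the tail back to empty. */
        struct mcs_node *expected = me;
        if (atomic_compare_exchange_strong(lock, &expected, NULL))
            return;
        /* A successor is mid-enqueue: wait for it to fill in next
         * (this is the mutual dependence the patent criticizes). */
        while ((succ = atomic_load(&me->next)) == NULL)
            ;
    }
    atomic_store(&succ->locked, false);  /* hand the lock over */
}
```

Note how `mcs_unlock` may block on the successor filling in `next`, so an error in either thread stalls the other, exactly as described above.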
- RTM: Restricted Transactional Memory
- The embodiments of this specification provide a data processing method, apparatus, device, and medium to solve the technical problem of how to control the lock mechanism more effectively and efficiently.
- An embodiment of this specification provides a data processing method, which includes: when a lock request from a thread to be locked is received, exchanging lock data with the thread data of the thread to be locked through the shared cache of the processor, and determining from the lock data whether the lock is occupied; if not, enabling the thread to be locked to obtain the lock; if so, enabling the thread to be locked to obtain the lock when the target thread data of the lock-occupying thread meets a preset condition; and/or
- upon receiving an unlock request from the lock-occupying thread, determining whether the thread data that the lock-occupying thread wrote into the shared cache when it exchanged data with the lock before occupying the lock has been changed; if it has been changed, changing the thread data of the lock-occupying thread so as to unlock it.
- An embodiment of this specification provides a data processing method, which includes: when a lock acquisition request from a first thread is received, determining whether the lock is occupied according to the lock data corresponding to the first data of the first thread, and writing the first data into the lock's cache line in the shared cache of the processor; if the lock is not occupied, enabling the first thread to acquire the lock; if it is occupied, enabling the first thread to acquire the lock when the target thread data of the lock-occupying thread meets a preset condition; upon receiving an unlock request from the first thread, determining whether the first data exists in the shared cache; if it does not exist, changing the first data of the first thread to unlock the first thread; if it exists, changing the first data in the shared cache to unlock the first thread.
- An embodiment of this specification provides a data processing device, including: a lock module, used to exchange lock data with the thread data of the thread to be locked through the shared cache of the processor when a lock request from the thread to be locked is received, and to determine from the lock data whether the lock is occupied; and, if the lock is not occupied, to enable the thread to be locked to obtain the lock; if the lock is occupied, to enable the thread to be locked to obtain the lock when the target thread data of the lock-occupying thread meets a preset condition; and/or, an unlock module, used, upon receiving an unlock request from the lock-occupying thread, to determine whether the thread data that the lock-occupying thread wrote into the shared cache when it exchanged data with the lock before occupying the lock has been changed; and, if that thread data has been changed, to change the thread data of the lock-occupying thread so that the lock-occupying thread is unlocked.
- An embodiment of this specification provides a data processing device, including: a lock module, used to determine, when a lock acquisition request from a first thread is received, whether the lock is occupied according to the lock data corresponding to the first data of the first thread, and to write the first data into the lock's cache line in the shared cache of the processor; and, if the lock is not occupied, to enable the first thread to acquire the lock; if the lock is occupied, to enable the first thread to acquire the lock when the target thread data of the lock-occupying thread meets a preset condition; and an unlock module, configured to determine, when an unlock request from the first thread is received, whether the first data exists in the shared cache; and, if the first data does not exist, to change the first data of the first thread to unlock the first thread; and, if the first data exists, to change the first data in the shared cache to unlock the first thread.
- An embodiment of this specification provides a data processing device, including: at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can: when receiving a lock request from the thread to be locked, exchange lock data with the thread data of the thread to be locked through the shared cache of the processor, and determine from the lock data whether the lock is occupied; if not, enable the thread to be locked to obtain the lock; if so, enable the thread to be locked to obtain the lock when the target thread data of the lock-occupying thread meets a preset condition; and/or, upon receiving an unlock request from the lock-occupying thread, determine whether the thread data that the lock-occupying thread wrote into the shared cache when it exchanged data with the lock before occupying the lock has been changed; if it has been changed, change the thread data of the lock-occupying thread so as to unlock it.
- An embodiment of this specification provides a data processing device, including: at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can: when receiving the lock request of the first thread, determine whether the lock is occupied according to the lock data corresponding to the first data of the first thread, and write the first data into the lock's cache line in the shared cache of the processor; if not, enable the first thread to acquire the lock; if so, enable the first thread to acquire the lock when the target thread data of the lock-occupying thread meets a preset condition; when receiving the unlock request of the first thread, determine whether the first data exists in the shared cache; if it does not exist, change the first data of the first thread to unlock the first thread; if it exists, change the first data in the shared cache to unlock the first thread.
- Embodiments of this specification provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the following steps:
- when a lock request is received, exchanging the lock data with the thread data of the thread to be locked through the shared cache of the processor, and determining from the lock data whether the lock is occupied; if not, enabling the thread to be locked to obtain the lock; if so, enabling the thread to be locked to obtain the lock when the target thread data of the lock-occupying thread meets the preset condition; and/or, upon receiving the unlock request of the lock-occupying thread, determining whether the thread data that the lock-occupying thread wrote into the shared cache when it exchanged data with the lock before occupying the lock has been changed; if it has been changed, changing the thread data of the lock-occupying thread to unlock it.
- Embodiments of this specification provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the following steps: when a lock request is received, determining whether the lock is occupied according to the lock data corresponding to the first data of the first thread, and writing the first data into the lock's cache line in the shared cache of the processor; if not, enabling the first thread to acquire the lock; if so, enabling the first thread to acquire the lock when the target thread data of the lock-occupying thread meets the preset condition; when the unlock request of the first thread is received, determining whether the first data exists in the shared cache; if it does not exist, changing the first data of the first thread to unlock the first thread; if it exists, changing the first data in the shared cache to unlock the first thread.
- The above technical solution adopted in the embodiments of this specification can achieve the following beneficial effects: during lock mechanism control, data is exchanged through the shared cache, which avoids redundant migration of the lock data, accelerates execution of critical sections, and reduces the occurrence of conflicts; at the same time, it avoids rollback on conflict, which simplifies the lock control process and improves lock control efficiency.
- Fig. 1 is a schematic flow chart of the data processing method in the first embodiment of this specification.
- Fig. 2 is a schematic diagram of the application of the data processing method in the first embodiment of this specification.
- Fig. 3 is a schematic diagram of the locking process in the first embodiment of this specification.
- Fig. 4 is a schematic flow chart of the data processing method in the second embodiment of this specification.
- Fig. 5 is a schematic diagram of the unlocking process in the second embodiment of this specification.
- Fig. 6 is a schematic flow chart of the data processing method in the third embodiment of this specification.
- Fig. 7 is a schematic diagram of the structure of the data processing device in the fourth embodiment of this specification.
- Fig. 8 is a schematic diagram of the structure of the data processing device in the fifth embodiment of this specification.
- Each thread has two variables: info.next (a pointer) and info.pending.
- When a thread (denoted thread A) wants to acquire a lock, thread A interacts with the lock: it reads the lock data and writes its own data into the lock's cache line, then judges from the lock data it read whether the lock is occupied. If the lock is not occupied, thread A obtains it. If the lock is occupied, then, because the thread currently occupying the lock (the "lock-occupying thread" for short) has also exchanged data with the lock, thread A obtains the private data address of the lock-occupying thread from the lock data it read, fills its own private data address into that thread's info.next variable, and waits for the lock-occupying thread to change thread A's info.pending value (the lock-occupying thread obtains thread A's private data address through its info.next variable and can then change thread A's data). If thread A's info.pending value is changed, the lock-occupying thread has unlocked and thread A obtains the lock.
- When a lock-occupying thread (denoted thread B) needs to unlock, it must determine whether the data it wrote into the lock's cache line when it interacted with the lock before occupying it has been changed; if not, no other thread wants to acquire the lock.
- The processor has private caches and a shared cache. Data read by each processor core can be written into the shared cache, and each core can read the data in the shared cache. Each core also has its own private cache; for any core, the data in its private cache cannot be read by other cores, and other cores cannot write data into it. In the prior art, when any thread wants to acquire a lock and interacts with it, the lock data is migrated into some level of private cache of the thread's processor core, to make that thread's next interaction with the lock convenient. As a result, when the next thread that wants to acquire the lock interacts with it, the current core must issue an instruction to send the lock data to the core corresponding to that next thread, and that core then stores the lock data in its own private cache so that the next thread can interact with the lock there (again for the convenience of its own next interaction).
- The existing lock mechanism process is thus complicated and the lock data must be continuously migrated, so operating efficiency is low, critical sections execute slowly, and conflicts increase.
- the first embodiment of this specification provides a data processing method.
- the execution body of this embodiment may be a computer or a server or a corresponding data processing system or a shared cache control unit, that is, the execution body may be diverse and can be set or changed according to actual conditions.
- A corresponding application program is installed on the terminal (including but not limited to a mobile phone or computer).
- The server corresponds to the application program, and data can be transmitted between the server and the terminal held by the user.
- The application program is used to display pages and information to the user and to handle input and output.
- the data processing method in this embodiment includes steps S101 to S103.
- Suppose a lock acquisition request from any thread (denoted thread C) is received; thread C may be referred to as the "thread to be locked", and the lock acquisition request may be referred to as a "lock request".
- The thread data of thread C and the lock data need to interact (that is, be exchanged).
- The thread data and the lock data are exchanged through the shared cache (which can be the shared cache of the processor); that is, the lock data (which can be set to 16 bytes or more) is stored in the shared cache.
- The shared cache may be the last-level cache (Last Level Cache, LLC), in which case the execution subject may be the control unit of the last-level cache.
- The lock request of thread C carries corresponding data, including but not limited to the private data address of thread C and its private data (that is, the content of the private data).
- the private data can be the pending value (or flag bit value) of thread C.
- the pending value can be initially 0 by default and can be changed.
- both the private data address and the private data can be 8 bytes or more.
- An initialization structure `struct mcs_info { long pending; /* 8 bytes */ };` can be set for thread C.
- The thread interaction data and the lock data used to interact with it (hereinafter, the "lock interaction data") have the same number of bytes.
- the initial content of the lock interaction data can be empty.
- Exchanging the lock data with the thread data of the thread to be locked through the shared cache of the processor includes: on the one hand, reading the lock interaction data from the shared cache and returning it to the pipeline corresponding to thread C (that is, the pipeline of the processor where thread C runs); on the other hand, writing the thread interaction data of thread C into the lock's cache line in the shared cache, that is, overwriting the lock interaction data in the shared cache with thread C's thread interaction data.
- For example, `old = XCHG_OPT(&lock, &value)` can be used for the exchange (where `old` receives the lock interaction data); this embodiment does not limit the mechanism. Thread interaction data belongs to the thread data, lock interaction data belongs to the lock data, and the exchange between thread interaction data and lock interaction data is the exchange between thread data and lock data.
- Determining from the lock interaction data whether the lock is occupied may include: if the lock interaction data is empty (for example, the lock data is still in its initialized state and has not been changed, or it has been changed but returned to the empty state), the lock is not occupied; if the lock interaction data is not empty (indicating that, before the lock request of the thread to be locked was received, some thread already exchanged data with the lock through the shared cache, so that the lock interaction data became the thread interaction data of the last thread to interact with the lock), the lock is occupied.
- If the lock is not occupied, thread C obtains the lock, as shown in Fig. 3.
- The target thread data of a thread belongs to the "thread data used for data interaction with the lock", that is, to the "thread interaction data of the thread".
- the target thread data may be the thread's pending value (private data), that is, info.pending.
- When the target thread data of the lock-occupying thread is greater than the lock interaction data obtained by the thread to be locked, the target thread data of the lock-occupying thread meets the preset condition.
- The lock interaction data obtained by the thread to be locked includes the pending value of the lock-occupying thread.
- When the current pending value of the lock-occupying thread is greater than the pending value of the lock-occupying thread contained in the lock interaction data obtained by the thread to be locked (that pending value is the part of the obtained lock interaction data corresponding to the target thread data of the lock-occupying thread), the target thread data of the lock-occupying thread can be considered greater than the lock interaction data obtained by the thread to be locked, and likewise greater than the lock data (since the lock interaction data belongs to the lock data).
- After thread C obtains the lock, it can enter the critical section to perform corresponding operations, such as modifying global variables; this embodiment does not limit the operations.
- The shared cache in this embodiment may be shared by any number of threads. For different threads to be locked, the lock data does not need to be migrated out of the shared cache.
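The acquire path of this first embodiment can be sketched in C. This is an illustrative sketch, not the patent's implementation: all names (`mcs_info`, `lock_word`, `shared_lock`, `acquire`) are assumptions, and since plain C cannot pin data to the last-level cache or perform a 16-byte far atomic there, a small `atomic_flag`-guarded swap emulates the single `XCHG_OPT` exchange on the lock's cache line.

```c
#include <stdatomic.h>
#include <stddef.h>

struct mcs_info {
    _Atomic long pending;         /* 8 bytes, initially 0 */
};

/* The data kept in the lock's cache line: the private data address
 * of the last thread to exchange with the lock, plus a snapshot of
 * that thread's pending value taken at exchange time. */
struct lock_word {
    struct mcs_info *owner;
    long snap;
};

struct shared_lock {
    atomic_flag guard;            /* emulates one 16-byte atomic exchange */
    struct lock_word w;
};

/* Emulated XCHG_OPT: atomically swap our 16-byte record with the
 * lock word.  The patent assumes hardware does this as a single
 * exchange executed at the shared cache, with no line migration. */
static struct lock_word xchg_opt(struct shared_lock *l, struct lock_word v) {
    while (atomic_flag_test_and_set(&l->guard))
        ;
    struct lock_word old = l->w;
    l->w = v;
    atomic_flag_clear(&l->guard);
    return old;
}

void acquire(struct shared_lock *l, struct mcs_info *me) {
    struct lock_word mine = { me, atomic_load(&me->pending) };
    struct lock_word old = xchg_opt(l, mine);
    if (old.owner == NULL)
        return;                   /* lock interaction data empty: acquired */
    /* Occupied: poll the previous owner's live pending value until it
     * exceeds the snapshot that travelled inside the lock word (the
     * "target thread data meets the preset condition" check). */
    while (atomic_load(&old.owner->pending) <= old.snap)
        ;
}
```

Carrying the pending snapshot inside the exchanged 16-byte record is what lets a late-arriving waiter return immediately when the previous owner has already incremented its pending value.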
- the second embodiment of this specification provides a data processing method, including steps S105 to S107.
- S105: When receiving the unlock request of the lock-occupying thread, determine whether the thread data that the lock-occupying thread wrote into the shared cache when it exchanged data with the lock through the shared cache of the processor, before occupying the lock, has been changed.
- After a thread obtains the lock and performs the corresponding operations, its unlock request is received. Assuming thread D is the lock-occupying thread, thread D also executed the aforementioned S101 and S103 on its way from thread to be locked to lock-occupying thread, so thread D has also exchanged data with the lock in the shared cache.
- It is therefore determined whether the thread interaction data of thread D written into the shared cache when thread D interacted with the lock before occupying it (this data also serves as the lock interaction data used to interact with the next thread to be locked after thread D) has been changed.
- When thread D's unlock request is received, if the thread interaction data of thread D written into the shared cache has been changed, this means that after thread D, a lock request from another thread (denoted thread E) was also received and the exchange between thread E's thread interaction data and the lock interaction data took place, so that the lock interaction data in the shared cache (that is, thread D's thread interaction data) was changed to thread E's thread interaction data (the private data addresses of two different threads differ, so their thread interaction data differs).
- If it has not been changed, changing the thread interaction data of thread D in the shared cache may include: writing thread D's thread interaction data in the shared cache as empty. When the next thread wants to acquire the lock, it will find the lock interaction data empty and can therefore acquire the lock, as shown in Fig. 5.
- The shared cache in this embodiment may be shared by any number of threads. For different lock-occupying threads, the lock data does not need to be migrated out of the shared cache.
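The unlock check just described (has the data we wrote into the lock's cache line been changed?) can be sketched as follows. As in the acquire sketch, the names (`mcs_info`, `shared_lock`, `release`) are illustrative assumptions, and an `atomic_flag`-guarded update emulates the single atomic operation on the lock line that the patent assumes executes in the shared cache.

```c
#include <stdatomic.h>
#include <stddef.h>

struct mcs_info { _Atomic long pending; };   /* per-thread private data */

struct lock_word {
    struct mcs_info *owner;   /* private data address of last writer */
    long snap;                /* that thread's pending value at write time */
};

struct shared_lock {
    atomic_flag guard;        /* emulates atomicity of the lock-line update */
    struct lock_word w;
};

void release(struct shared_lock *l, struct mcs_info *me) {
    while (atomic_flag_test_and_set(&l->guard))
        ;
    /* Is the data we wrote at acquire time still in the lock line? */
    int unchanged = (l->w.owner == me);
    if (unchanged)
        l->w.owner = NULL;    /* no waiter: write the lock line empty */
    atomic_flag_clear(&l->guard);
    if (!unchanged)
        /* A later thread exchanged its data in: bump only our OWN
         * pending value; the waiter polling it will see it exceed
         * the snapshot it obtained and take the lock. */
        atomic_fetch_add(&me->pending, 1);
}
```

Unlike MCS, the unlocking thread never touches another thread's data: it either empties the lock line or increments its own pending value.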
- Suppose thread Y also performs the process of this embodiment; that is, when thread X's lock request is received (or before it), thread Y's lock request has already been received and the exchange between thread Y's thread interaction data and the lock interaction data has taken place, so the lock interaction data in the shared cache used to interact with thread X has been overwritten with thread Y's thread interaction data (this echoes the statement above that "if the lock interaction data is not empty, the lock is occupied").
- Thread X's thread interaction data is then exchanged with the lock interaction data through the shared cache: the lock interaction data (which is thread Y's thread interaction data) is read from the shared cache and released into the pipeline corresponding to thread X, so that thread X obtains thread Y's thread interaction data, including thread Y's private data address and private data; and the lock interaction data in the shared cache used for interaction with other threads is changed to thread X's thread interaction data.
- Thread X can then wait for thread Y's thread interaction data to be changed (because thread X's lock request was received earlier than thread Y's unlock request; when thread Y's unlock request is received, thread Y will find that the data it wrote into the shared cache has been changed to thread X's thread interaction data, and it will therefore change its own thread data, specifically its own thread interaction data).
- Specifically, thread X or the execution subject can (continuously or periodically) obtain thread Y's private data through thread Y's private data address, and then (continuously or periodically) compare thread Y's current private data with the copy of thread Y's private data that thread X obtained earlier.
- When thread Y unlocks, its thread interaction data is changed, that is, thread Y's pending value is increased by 1; the target thread data of the lock-occupying thread then becomes greater than the lock interaction data obtained by thread X (that is, greater than the earlier copy of thread Y's thread interaction data), so thread X obtains the lock.
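Putting the two halves together, the thread X / thread Y hand-off above can be replayed step by step in a single-threaded sketch (all names illustrative; a flag-guarded swap again emulates the 16-byte exchange the patent performs in the shared cache). Thread X's exchange step is exposed directly here, since calling a blocking acquire would spin until Y unlocks.

```c
#include <stdatomic.h>
#include <stddef.h>

struct mcs_info { _Atomic long pending; };

struct lock_word { struct mcs_info *owner; long snap; };

struct shared_lock { atomic_flag guard; struct lock_word w; };

/* Emulated XCHG_OPT: swap a 16-byte record with the lock line. */
static struct lock_word xchg_opt(struct shared_lock *l, struct lock_word v) {
    while (atomic_flag_test_and_set(&l->guard))
        ;
    struct lock_word old = l->w;
    l->w = v;
    atomic_flag_clear(&l->guard);
    return old;
}

/* Unlock step; returns 1 if the lock line still held our data
 * (no waiter arrived), 0 if a waiter's data had replaced ours. */
static int release_step(struct shared_lock *l, struct mcs_info *me) {
    while (atomic_flag_test_and_set(&l->guard))
        ;
    int unchanged = (l->w.owner == me);
    if (unchanged)
        l->w.owner = NULL;                   /* write the line empty */
    atomic_flag_clear(&l->guard);
    if (!unchanged)
        atomic_fetch_add(&me->pending, 1);   /* signal via our own data */
    return unchanged;
}
```

The replay: (1) Y exchanges on the empty line and occupies the lock; (2) X exchanges and reads back Y's address and pending snapshot; (3) Y's unlock finds its data replaced by X's, so it increments only its own pending value; (4) X's wait condition, Y's live pending greater than the snapshot, now holds, so X obtains the lock without Y ever touching X's data.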
- the aforementioned threads A, B, C, D, E, X, and Y do not specifically refer to a certain thread, but can refer to any thread.
- the above embodiment discloses a new data processing method that can be used for lock mechanism control.
- The exchange between threads and the lock is carried out through the shared cache (thread interaction data is written to the shared cache and lock interaction data is read from it, so the exchange effectively takes place in the shared cache). The lock data does not need to be migrated between processor cores, that is, the redundant and complex migration of lock data is avoided, so the interaction between threads and the lock is shorter and more efficient, and thread processing is more efficient.
- To unlock, either the thread interaction data of the lock-occupying thread or the thread interaction data in the shared cache (which is in fact the lock interaction data) can be changed; and the thread to be locked, or the execution subject, can obtain the private data address of the lock-occupying thread through the exchange between the lock-occupying thread and the lock, and can thereby observe the change in the lock-occupying thread's thread interaction data (its private data change), so that the waiting thread obtains the lock.
- The above embodiments can speed up execution of critical sections and reduce thread time consumption; they can simplify the lock control process, improving lock control efficiency and thread processing efficiency; and they can reduce the occurrence of conflicts and avoid rollback during conflicts.
- the third embodiment of this specification provides a data processing method, and the execution subject of this embodiment can refer to the first embodiment.
- the data processing method of this embodiment includes S201 to S207.
- S201: When receiving the lock acquisition request of the first thread, determine whether the lock is occupied according to the lock data corresponding to the first data of the first thread, and write the first data into the lock's cache line in the shared cache of the processor.
- The first thread can be equivalent to thread C in the first embodiment.
- The first data can be equivalent to the thread interaction data in the first embodiment.
- The data of the lock "corresponding" to the first data can be equivalent to the lock interaction data in the first embodiment; determining whether the lock is occupied according to the data corresponding to the first data of the first thread (that is, the read lock data corresponding to the first data of the first thread) can be equivalent to determining whether the lock is occupied according to the lock interaction data in the first embodiment.
- the "target thread data” in this embodiment is the same as the “target thread data” in the first embodiment.
- S205 When receiving the unlock request of the first thread, determine whether the first data exists in the shared cache.
- S207: If the first data does not exist in the shared cache, change the first data of the first thread to unlock the first thread; if the first data exists in the shared cache, change the first data in the shared cache to unlock the first thread.
- the second thread is a thread that wants to acquire a lock after the first thread, that is, receiving the lock request of the second thread is later than receiving the lock request of the first thread.
- When a lock request of the second thread is received, whether the lock is occupied is determined according to the data of the lock corresponding to the second data of the second thread, and the second data is written into the lock's cache line; if the lock is not occupied, the second thread is made to acquire the lock; if it is occupied, the second thread is made to acquire the lock when the first data meets a preset condition.
- the second data is equivalent to the thread interaction data of the second thread.
- When the unlock request of the second thread is received, it is determined whether the second data exists in the cache line; if it does not exist, the second data of the second thread is changed so that the second thread is unlocked; if it exists, the second data in the shared cache is changed so that the second thread is unlocked.
- If the lock request of the second thread is received earlier than the unlock request of the first thread, the data of the lock corresponding to the second data is the first data; and/or, if the lock request of the second thread is received later than the unlock request of the first thread, the data of the lock corresponding to the second data is the data in the shared cache after the first data has been changed.
- Determining whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread includes: if that data is empty, the lock is not occupied; if it is not empty, the lock is occupied.
- For any of the above data (without limitation), if the data is greater than the corresponding lock data, the data meets the preset condition. Thus, when the target thread data of the lock-occupying thread is greater than the corresponding lock data, the target thread data meets the preset condition; when the first data is greater than the corresponding lock data, the first data meets the preset condition.
- the target thread data belongs to data used by the lock-holding thread to write to the shared cache.
- the shared cache is the last level cache.
- the first data includes the private data address and private data of the first thread.
- the first data is the pending value of the first thread.
- changing the first data of the first thread includes: adding 1 to the pending value of the first thread.
- the second data is the pending value of the second thread.
- changing the second data of the second thread includes: adding 1 to the pending value of the second thread.
- changing the first data in the shared cache includes: writing the first data in the cache line as empty.
- changing the second data in the shared cache includes: writing the second data in the cache line as empty.
- the first thread and the second thread do not refer to specific threads; either may be any thread.
- This embodiment discloses a new data processing method that can be used for lock mechanism control.
- lock data is read and thread data is written through the shared cache (the first data or the second data is written into the shared cache, and the corresponding lock data is read from the shared cache, which is equivalent to reading and writing data in the shared cache), avoiding redundant and complicated migration of lock data; in this embodiment, in order to unlock the lock-occupying thread and enable other threads to acquire the lock, either the data of the lock-occupying thread (such as the first data or the second data) or the thread data in the shared cache (such as the first data or the second data) can be changed.
- the thread to be locked or the execution subject can obtain the private data address of the lock-occupying thread through the data interaction between the lock-occupying thread and the lock, and can then observe the thread data change (private data change) of the lock-occupying thread.
- when the lock-occupying thread unlocks, it does not need to monitor or take into account the data changes of other threads, nor does it need to operate on the data of other threads. It can be seen that the above embodiment can speed up the execution of the critical section and reduce the time threads spend waiting; it can simplify the lock control process and improve lock control efficiency; and it can reduce the occurrence of conflicts and avoid rollbacks when conflicts occur.
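The lock and unlock flow summarized above can be modeled in a few lines. This is a heavily simplified single-process sketch: the lock's cache line is a plain attribute, the atomic exchange is simulated with tuple assignment, and no real spinning, scheduling, or last-level-cache behavior is involved. It shows only the decision logic described in the text: empty lock data means acquire; otherwise wait on the preset condition; on unlock, clear the thread's own data if it is still in the cache line, else add 1 to the pending value.

```python
class Lock:
    def __init__(self):
        self.cache_line = None              # lock data held in the shared cache

class Thread:
    def __init__(self, name):
        self.name = name
        self.pending = 0                    # private pending value
        self.prev = None                    # previous lock data read in the exchange
        self.snapshot = 0                   # previous holder's pending at exchange

def lock(lk, t):
    """Exchange through the 'shared cache': write t's data into the lock's
    cache line and read back the previous lock data. Empty means free."""
    t.prev, lk.cache_line = lk.cache_line, t
    if t.prev is None:
        return True                         # acquired immediately
    t.snapshot = t.prev.pending             # lock data used for the condition
    return False                            # must wait

def may_enter(t):
    """Preset condition: the previous holder's pending value has grown past
    the snapshot taken during the exchange."""
    return t.prev is None or t.prev.pending > t.snapshot

def unlock(lk, t):
    if lk.cache_line is t:                  # own thread data still in cache line
        lk.cache_line = None                # unchanged -> write cache line empty
    else:                                   # changed: a waiter exchanged it out
        t.pending += 1                      # add 1 to own pending to hand over

# Two threads contending:
lk = Lock()
a, b = Thread("A"), Thread("B")
assert lock(lk, a)                          # A finds empty lock data: acquires
assert not lock(lk, b)                      # B must wait behind A
assert not may_enter(b)
unlock(lk, a)                               # A's data was exchanged out: pending+1
assert may_enter(b)                         # B may now enter the critical section
unlock(lk, b)                               # B's data still present: cleared
assert lk.cache_line is None
```

Note how, unlike an MCS-style handoff, the unlocking thread only changes its own data (or the shared cache line); it never waits on or writes into another thread's record.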
- the fourth embodiment of this specification provides a data processing device, which includes a locking module 301 and an unlocking module 303.
- the locking module 301 (or the first locking module 301) is used to, upon receiving the lock request of the thread to be locked, perform the interaction between lock data and the thread data of the thread to be locked through the shared cache of the processor, and determine according to the lock data whether the lock is occupied; and, if the lock is not occupied, the thread to be locked obtains the lock; if the lock is occupied, the thread to be locked obtains the lock when the target thread data of the lock-occupying thread meets the preset condition.
- the unlocking module 303 (or the first unlocking module 303) is configured to, upon receiving an unlock request of the lock-occupying thread, determine whether the thread data that the lock-occupying thread wrote into the shared cache of the processor during its data interaction with the lock before occupying the lock has been changed; and, if the thread data written to the shared cache was changed during the interaction, to change the thread data of the lock-occupying thread so that the lock-occupying thread is unlocked.
- the unlocking module 303 is further configured to: upon receiving an unlock request of the lock-occupying thread, if the thread data written into the shared cache during the lock-occupying thread's interaction with the lock before occupying the lock has not been changed, change that thread data in the shared cache so that the lock-occupying thread is unlocked.
- performing the interaction between the lock data and the thread data of the thread to be locked through the shared cache of the processor includes: reading the lock data used for the interaction from the shared cache and putting it into the pipeline corresponding to the thread; and writing the thread data used for the interaction into the cache line of the lock in the shared cache.
- determining whether the lock is occupied according to the lock data includes: if the lock data is empty, the lock is not occupied; if the lock data is not empty, the lock is occupied.
- when the target thread data of the lock-occupying thread is greater than the lock data, the target thread data meets the preset condition.
- the target thread data belongs to thread data used by the lock-holding thread for data interaction with the lock.
- the shared cache is the last level cache.
- the thread data includes a private data address and private data of the thread.
- the private data is the pending value of the thread.
- changing the thread data of the thread includes: adding 1 to the pending value of the thread.
- changing the thread data in the shared cache includes: writing the thread data written in the shared cache during the interaction to be empty.
- the fifth embodiment of this specification provides a data processing device, which includes a locking module 401 and an unlocking module 403.
- the locking module 401 (or the second locking module 401) is used to, upon receiving the lock request of the first thread, determine whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread, and write the first data into the cache line of the lock in the shared cache of the processor; and, if the lock is not occupied, enable the first thread to obtain the lock; if the lock is occupied, enable the first thread to acquire the lock when the target thread data of the lock-occupying thread meets the preset condition.
- the unlocking module 403 (or the second unlocking module 403) is configured to, upon receiving the unlock request of the first thread, determine whether the first data exists in the shared cache; and, if the first data does not exist, to change the first data of the first thread so that the first thread is unlocked; and, if the first data exists, to change the first data in the shared cache so that the first thread is unlocked.
- the locking module 401 is further configured to: upon receiving a lock request of the second thread, determine whether the lock is occupied according to the data of the lock corresponding to the second data of the second thread, and write the second data into the cache line of the lock in the shared cache; if the lock is not occupied, the second thread is allowed to acquire the lock; if it is occupied, the second thread acquires the lock when the first data meets a preset condition.
- the unlocking module 403 is further configured to: upon receiving the unlock request of the second thread, determine whether the second data exists in the cache line; if it does not exist, change the second data of the second thread so that the second thread is unlocked; and/or, if it does exist, change the second data in the shared cache so that the second thread is unlocked.
- if the lock request of the second thread is received earlier than the unlock request of the first thread, the data of the lock corresponding to the second data is the first data; and/or, if the lock request of the second thread is received later than the unlock request of the first thread, the data of the lock corresponding to the second data is the changed first data in the shared cache.
- determining whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread includes: if the data of the lock corresponding to the first data of the first thread is empty, the lock is not occupied; if that data is not empty, the lock is occupied.
- for any data, if the data is greater than the corresponding lock data, the data meets the preset condition. Thus, when the target thread data of the lock-occupying thread is greater than the corresponding lock data, the target thread data meets the preset condition; when the first data is greater than the corresponding lock data, the first data meets the preset condition.
- the target thread data belongs to data used by the lock-holding thread to write to the shared cache.
- the shared cache is the last level cache.
- the first data includes a private data address and private data of the first thread.
- the first data is a pending value of the first thread.
- changing the first data of the first thread includes: adding 1 to the pending value of the first thread.
- the second data is a pending value of the second thread.
- changing the second data of the second thread includes: adding 1 to the pending value of the second thread.
- changing the first data in the shared cache includes: writing the first data in the cache line as empty.
- changing the second data in the shared cache includes: writing the second data in the cache line as empty.
- a sixth embodiment of the present specification provides a data processing device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to: upon receiving a lock request of the thread to be locked, perform the interaction between lock data and the thread data of the thread to be locked through the shared cache of the processor, and determine according to the lock data whether the lock is occupied; if not, the thread to be locked obtains the lock; if so, the thread to be locked obtains the lock when the target thread data of the lock-occupying thread meets the preset condition; and, upon receiving an unlock request of the lock-occupying thread, determine whether the thread data that the lock-occupying thread wrote into the shared cache during its data interaction with the lock before occupying the lock has been changed; if it has been changed, change the thread data of the lock-occupying thread so that the lock-occupying thread is unlocked.
- a seventh embodiment of the present specification provides a data processing device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to: upon receiving the lock request of the first thread, determine whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread, and write the first data into the cache line of the lock in the shared cache of the processor; if not, make the first thread acquire the lock; if so, make the first thread acquire the lock when the target thread data of the lock-occupying thread meets the preset condition; upon receiving the unlock request of the first thread, determine whether the first data exists in the shared cache; if it does not exist, change the first data of the first thread to unlock the first thread; if it exists, change the first data in the shared cache to unlock the first thread.
- the eighth embodiment of the present specification provides a computer-readable storage medium that stores computer-executable instructions.
- when the computer-executable instructions are executed by a processor, the following steps are implemented: upon receiving a lock request of the thread to be locked, the lock data and the thread data of the thread to be locked are exchanged through the shared cache of the processor, and it is determined according to the lock data whether the lock is occupied; if not, the thread to be locked acquires the lock; if so, the thread to be locked acquires the lock when the target thread data of the lock-occupying thread meets the preset condition; and, upon receiving the unlock request of the lock-occupying thread, it is determined whether the thread data written into the shared cache when the lock-occupying thread performed data interaction with the lock through the shared cache of the processor before occupying the lock has been changed; if it has been changed, the thread data of the lock-occupying thread is changed so that the lock-occupying thread is unlocked.
- the ninth embodiment of the present specification provides a computer-readable storage medium that stores computer-executable instructions.
- when the computer-executable instructions are executed by a processor, the following steps are implemented: upon receiving a lock request of the first thread, determine whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread, and write the first data into the cache line of the lock in the shared cache of the processor; if not, make the first thread acquire the lock; if so, make the first thread acquire the lock when the target thread data of the lock-occupying thread meets the preset condition; upon receiving the unlock request of the first thread, determine whether the first data exists in the shared cache; if it does not exist, modify the first data of the first thread to unlock the first thread; if it exists, modify the first data in the shared cache to unlock the first thread.
- the apparatus, equipment, non-volatile computer readable storage medium and method provided in the embodiments of this specification correspond to each other. Therefore, the apparatus, equipment, and non-volatile computer storage medium also have beneficial technical effects similar to the corresponding method.
- the beneficial technical effects of the method have been described in detail above, therefore, the beneficial technical effects of the corresponding device, equipment, and non-volatile computer storage medium will not be repeated here.
- for improvements to a technology, a clear distinction can be made between hardware improvements (for example, improvements to circuit structures such as diodes, transistors, and switches) and software improvements (improvements to a method flow).
- the improvement of many methods and processes of today can be regarded as a direct improvement of the hardware circuit structure.
- Designers almost always get the corresponding hardware circuit structure by programming the improved method flow into the hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized by the hardware entity module.
- a programmable logic device (for example, a Field Programmable Gate Array (FPGA))
- PLD: Programmable Logic Device
- FPGA: Field Programmable Gate Array
- HDL: Hardware Description Language
- ABEL: Advanced Boolean Expression Language
- AHDL: Altera Hardware Description Language
- HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, RHDL (Ruby Hardware Description Language)
- VHDL: Very-High-Speed Integrated Circuit Hardware Description Language
- Verilog
- the controller can be implemented in any suitable manner.
- the controller can take the form of, for example, a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, or embedded microcontrollers.
- examples of controllers include, but are not limited to, the following microcontrollers: ARC625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; the memory controller can also be implemented as part of the memory control logic.
- in addition to implementing the controller purely in computer-readable program code, it is entirely possible to program the method steps so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component. Or, the devices for realizing various functions can even be regarded both as software modules implementing the method and as structures within the hardware component.
- a typical implementation device is a computer.
- the computer may be, for example, a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or any combination of these devices.
- the embodiments of this specification can be provided as a method, a system, or a computer program product. Therefore, the embodiments of this specification may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of this specification may adopt the form of computer program products implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
- these computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
- these computer program instructions can also be loaded onto a computer or other programmable data processing equipment so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
- the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
- the memory may include non-persistent memory in computer-readable media, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM); memory is an example of a computer-readable medium.
- Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
- the information can be computer-readable instructions, data structures, program modules, or other data.
- examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
- program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
- This specification can also be practiced in distributed computing environments. In these distributed computing environments, tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media including storage devices.
Abstract
Disclosed in the embodiments of the present description are a data processing method, apparatus, device, and medium, the data processing method comprising: upon receiving a lock request of a thread to be locked, performing an interaction between lock data and the thread data of the thread to be locked by means of a shared cache of a processor, and determining according to the lock data whether the lock is occupied; if not, enabling the thread to be locked to acquire the lock; if so, enabling the thread to be locked to acquire the lock once the target thread data of the lock-occupying thread satisfies a preset condition; and/or, upon receiving an unlock request from the lock-occupying thread, determining whether the thread data written into said shared cache when the lock-occupying thread performed data interaction with the lock by means of the shared cache of the processor before occupying the lock has changed; if it has changed, changing the thread data of the lock-occupying thread, causing the lock-occupying thread to be unlocked.
Description
This application relates to the field of computer technology, and in particular to a data processing method, apparatus, device, and medium.
At present, memory uses a lock mechanism, such as the MCS lock, to control thread access, and when a thread acquires the lock it needs to exchange data with the lock, for example the pending value. Under the existing lock mechanism, when a thread exchanges data with a lock, it must first determine which cache level, for example L1 (first-level cache) or L2 (second-level cache), holds the lock data to be exchanged, and only then can it go to the corresponding cache to exchange the data. In addition, when the previous thread wants to unlock, it must wait for its own "pointer" data (the info.next data) to be filled in by the next thread, and then change the pending value of that next thread; only then can the previous thread unlock, that is, only then can the next thread obtain the lock. The two adjacent threads are therefore bound together and affect each other: an error in one thread affects the locking or unlocking of the other. It can be seen that the existing lock mechanism has a complicated process and low operating efficiency, so the execution efficiency of the critical section is low and conflicts are greater (that is, threads take longer). In addition, some existing technologies, such as RTM (Restricted Transactional Memory), can help optimize the performance of critical regions, but they merely reduce the granularity of the critical region, cause more delay when conflicts occur, and are difficult to implement.
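The MCS-style handoff criticized above can be sketched as follows. This is an illustrative, single-process model only: the field names (next, pending) follow the passage's wording, the queue logic is a simplified assumption rather than any specific kernel implementation, and the real spin loop is replaced by a comment.

```python
class Node:
    def __init__(self):
        self.next = None            # "info.next": filled in by the *next* thread
        self.pending = True         # cleared by the *previous* thread on handoff

class MCSLock:
    def __init__(self):
        self.tail = None

    def acquire(self, node):
        prev, self.tail = self.tail, node
        if prev is None:
            node.pending = False    # queue empty: lock acquired immediately
        else:
            prev.next = node        # couple self to the previous thread
            # a real implementation would now spin: while node.pending: pass

    def release(self, node):
        # The holder must wait for its next pointer to be filled, then change
        # the *next* thread's pending value -- the coupling the passage criticizes.
        if node.next is not None:
            node.next.pending = False
        elif self.tail is node:
            self.tail = None

lk = MCSLock()
a, b = Node(), Node()
lk.acquire(a)
assert a.pending is False           # A holds the lock
lk.acquire(b)
assert b.pending is True            # B is queued behind A
lk.release(a)                       # A must write into B's node to hand over
assert b.pending is False
lk.release(b)
```

The point of contrast for the embodiments that follow: here each unlock requires the holder to wait on and write another thread's data, so adjacent threads are tightly coupled.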
In view of this, a more effective and efficient lock mechanism control scheme is needed.
Summary of the invention
The embodiments of this specification provide a data processing method, apparatus, device, and medium to solve the technical problem of how to control the lock mechanism more effectively and efficiently.
To solve the above technical problems, the embodiments of this specification are implemented as follows.
An embodiment of this specification provides a data processing method, including: upon receiving a lock request of a thread to be locked, performing an interaction between lock data and the thread data of the thread to be locked through the shared cache of the processor, and determining according to the lock data whether the lock is occupied; if not, enabling the thread to be locked to obtain the lock; if so, enabling the thread to be locked to obtain the lock when the target thread data of the lock-occupying thread meets a preset condition; and/or, upon receiving an unlock request of the lock-occupying thread, determining whether the thread data written into the shared cache when the lock-occupying thread performed data interaction with the lock through the shared cache of the processor before occupying the lock has been changed; if it has been changed, changing the thread data of the lock-occupying thread so that the lock-occupying thread is unlocked.
An embodiment of this specification provides a data processing method, including: upon receiving a lock request of a first thread, determining whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread, and writing the first data into the cache line of the lock in the shared cache of the processor; if not, enabling the first thread to acquire the lock; if so, enabling the first thread to acquire the lock when the target thread data of the lock-occupying thread meets a preset condition; upon receiving an unlock request of the first thread, determining whether the first data exists in the shared cache; if it does not exist, changing the first data of the first thread so that the first thread is unlocked; if it exists, changing the first data in the shared cache so that the first thread is unlocked.
An embodiment of this specification provides a data processing apparatus, including: a locking module, configured to, upon receiving a lock request of a thread to be locked, perform an interaction between lock data and the thread data of the thread to be locked through the shared cache of the processor, and determine according to the lock data whether the lock is occupied; and, if the lock is not occupied, enable the thread to be locked to obtain the lock; if the lock is occupied, enable the thread to be locked to obtain the lock when the target thread data of the lock-occupying thread meets a preset condition; and/or, an unlocking module, configured to, upon receiving an unlock request of the lock-occupying thread, determine whether the thread data written into the shared cache when the lock-occupying thread performed data interaction with the lock through the shared cache of the processor before occupying the lock has been changed; and, if the thread data written into the shared cache during the interaction has been changed, change the thread data of the lock-occupying thread so that the lock-occupying thread is unlocked.
An embodiment of this specification provides a data processing apparatus, including: a locking module, configured to, upon receiving a lock request of a first thread, determine whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread, and write the first data into the cache line of the lock in the shared cache of the processor; and, if the lock is not occupied, enable the first thread to obtain the lock; if the lock is occupied, enable the first thread to acquire the lock when the target thread data of the lock-occupying thread meets a preset condition; and an unlocking module, configured to, upon receiving an unlock request of the first thread, determine whether the first data exists in the shared cache; and, if the first data does not exist, change the first data of the first thread so that the first thread is unlocked; and, if the first data exists, change the first data in the shared cache so that the first thread is unlocked.
An embodiment of this specification provides a data processing device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to: upon receiving a lock request of a thread to be locked, perform an interaction between lock data and the thread data of the thread to be locked through the shared cache of the processor, and determine according to the lock data whether the lock is occupied; if not, enable the thread to be locked to obtain the lock; if so, enable the thread to be locked to obtain the lock when the target thread data of the lock-occupying thread meets a preset condition; and/or, upon receiving an unlock request of the lock-occupying thread, determine whether the thread data written into the shared cache when the lock-occupying thread performed data interaction with the lock through the shared cache of the processor before occupying the lock has been changed; if it has been changed, change the thread data of the lock-occupying thread so that the lock-occupying thread is unlocked.
An embodiment of this specification provides a data processing device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to: upon receiving a lock request of a first thread, determine whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread, and write the first data into the cache line of the lock in the shared cache of the processor; if not, enable the first thread to acquire the lock; if so, enable the first thread to acquire the lock when the target thread data of the lock-occupying thread meets a preset condition; upon receiving an unlock request of the first thread, determine whether the first data exists in the shared cache; if it does not exist, change the first data of the first thread so that the first thread is unlocked; if it exists, change the first data in the shared cache so that the first thread is unlocked.
An embodiment of this specification provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the following steps: upon receiving a lock-acquisition request from a thread to be locked, exchanging lock data and thread data of the thread to be locked through a shared cache of the processor, and determining, according to the lock data, whether the lock is occupied; if not, enabling the thread to be locked to obtain the lock; if so, enabling the thread to be locked to obtain the lock when target thread data of the lock-occupying thread meets a preset condition; and/or, upon receiving an unlock request from a lock-occupying thread, determining whether the thread data that the lock-occupying thread wrote into the shared cache, when exchanging data with the lock through the shared cache of the processor before occupying the lock, has been changed; if changed, changing the thread data of the lock-occupying thread so that the lock-occupying thread is unlocked.
An embodiment of this specification provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the following steps: upon receiving a lock-acquisition request from a first thread, determining whether the lock is occupied according to the data of the lock corresponding to first data of the first thread, and writing the first data into the cache line of the lock in a shared cache of the processor; if the lock is not occupied, enabling the first thread to acquire the lock; if the lock is occupied, enabling the first thread to acquire the lock when target thread data of the lock-occupying thread meets a preset condition; upon receiving an unlock request from the first thread, determining whether the first data exists in the shared cache; if it does not exist, changing the first data of the first thread so that the first thread is unlocked; if it exists, changing the first data in the shared cache so that the first thread is unlocked.
The at least one technical solution adopted in the embodiments of this specification can achieve the following beneficial effects: during lock-mechanism control, all data exchange is performed through the shared cache, which avoids redundant and complicated migration of the lock data, accelerates execution of the critical section, and reduces the occurrence of conflicts, while avoiding rollback when a conflict does occur; this simplifies the lock-control flow and improves lock-control efficiency.
In order to describe the technical solutions in the embodiments of this specification more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some of the embodiments recorded in this specification; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
Fig. 1 is a schematic flowchart of the data processing method in the first embodiment of this specification.
Fig. 2 is a schematic diagram of an application of the data processing method in the first embodiment of this specification.
Fig. 3 is a schematic diagram of the lock-acquisition process in the first embodiment of this specification.
Fig. 4 is a schematic flowchart of the data processing method in the second embodiment of this specification.
Fig. 5 is a schematic diagram of the unlock process in the second embodiment of this specification.
Fig. 6 is a schematic flowchart of the data processing method in the third embodiment of this specification.
Fig. 7 is a schematic diagram of the structure of the data processing apparatus in the fourth embodiment of this specification.
Fig. 8 is a schematic diagram of the structure of the data processing apparatus in the fifth embodiment of this specification.
In order to enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely below in conjunction with the drawings in the embodiments of this specification. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments of this specification, all other embodiments obtained by a person of ordinary skill in the art without creative work shall fall within the protection scope of this application.
The commonly used MCS lock operates as follows. Each thread has two variables, info.next (a pointer) and info.pending. When a thread (call it thread A) wants to acquire the lock, thread A exchanges data with the lock: it reads the lock data and writes its own data into the lock's cache line. Thread A then judges from the lock data it read whether the lock is occupied. If not, thread A obtains the lock. If the lock is occupied, then since the thread currently holding the lock (the "lock-occupying thread" for short) also exchanged data with the lock, thread A obtains the private data address of the current lock-occupying thread from the lock data it read, writes its own private data address into that thread's info.next variable, and waits for the lock-occupying thread to change thread A's info.pending value (the lock-occupying thread obtains thread A's private data address through its own info.next variable and can therefore change thread A's data). Once thread A's info.pending value is changed, the lock-occupying thread is unlocked and thread A can obtain the lock.
When a lock-occupying thread (call it thread B) needs to unlock, it must judge whether the data it wrote into the lock's cache line, when exchanging data with the lock before occupying it, has been changed. If not, no other thread wants to acquire the lock, so thread B clears the data it wrote into the lock's cache line and is unlocked. If the data has been changed, another thread is applying for the lock (a thread acquiring the lock exchanges data with the lock, and the data it writes into the lock's cache line overwrites the data thread B wrote there); in that case, only after thread B's info.next variable has been written by the other thread can thread B determine, through info.next, the data address of the thread that wants to acquire the lock and then change that thread's info.pending value, whereupon thread B is unlocked and the other thread acquires the lock.
As can be seen from the above, two consecutive threads that want to acquire the lock are bound together and affect each other: only when the latter thread writes its own data address into the former thread's info.next, and the former thread changes the latter thread's info.pending value, can the former thread unlock and the latter thread acquire the lock. If either side goes wrong, the lock mechanism cannot operate.
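The conventional MCS handshake described above can be sketched as follows. This is an illustrative model only: the names (mcs_node, mcs_acquire, mcs_release) and the use of C11 atomics are ours, not from this specification, and the busy-wait loops stand in for the waiting described in the text.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Illustrative sketch of the conventional MCS queue lock described above. */
struct mcs_node {
    _Atomic(struct mcs_node *) next;   /* info.next: the successor thread     */
    atomic_int pending;                /* info.pending: nonzero = still waits */
};

typedef _Atomic(struct mcs_node *) mcs_lock_t;   /* lock word: tail of queue */

void mcs_acquire(mcs_lock_t *lock, struct mcs_node *me) {
    atomic_store(&me->next, NULL);
    atomic_store(&me->pending, 1);
    /* The "data exchange" step: swap our node with the lock word. */
    struct mcs_node *pred = atomic_exchange(lock, me);
    if (pred != NULL) {                 /* lock occupied: link behind holder  */
        atomic_store(&pred->next, me);  /* write our address into pred's next */
        while (atomic_load(&me->pending))   /* wait for holder to change it   */
            ;
    }
}

void mcs_release(mcs_lock_t *lock, struct mcs_node *me) {
    struct mcs_node *succ = atomic_load(&me->next);
    if (succ == NULL) {
        struct mcs_node *expected = me;
        /* No successor visible: try to clear the lock word. */
        if (atomic_compare_exchange_strong(lock, &expected, NULL))
            return;
        /* A successor is mid-enqueue: wait until it links itself. */
        while ((succ = atomic_load(&me->next)) == NULL)
            ;
    }
    atomic_store(&succ->pending, 0);    /* hand the lock to the successor     */
}
```

The sketch makes the coupling visible: mcs_release cannot finish until the successor has written `next`, and the successor cannot proceed until its `pending` is changed, which is exactly the mutual dependence criticized above.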
A processor has private caches and a shared cache. Data read by any processor core can be written into the shared cache, and every processor core can read the data in the shared cache. Each processor core also has its own private cache; for any processor core, the data in its private cache cannot be read by other processor cores, nor can other processor cores write data into it. In the prior art, when any thread wants to acquire the lock and exchanges data with it, the thread migrates the lock data into some level of the private cache of its own processor core, to make its own next exchange with the lock convenient. Consequently, before the next thread that wants the lock can exchange data with it, the shared cache must first be asked which level of which processor core's cache holds the lock data; the shared cache then instructs the core storing the lock data to send it to the core corresponding to the next thread, and that core stores the lock data into its own private cache, so that the next thread exchanges data with the lock in its own private cache (again for the convenience of its own next acquisition attempt). In view of this, the existing lock mechanism is complicated and the lock data must be migrated continuously, so operating efficiency is low, execution of the critical section is inefficient, and conflicts are greater.
As shown in Fig. 1, the first embodiment of this specification provides a data processing method. The execution subject of this embodiment may be a computer, a server, a corresponding data processing system, or a shared-cache control unit; that is, the execution subject may take many forms and may be set or changed according to actual conditions. In addition, a third-party application program may assist the execution subject in executing this embodiment. For example, as shown in Fig. 2, the data processing method in this embodiment may be executed by a server, and a corresponding application program may be installed on a terminal held by a user (including but not limited to a mobile phone or a computer); the server corresponds to the application program, data can be transmitted between the server and the terminal held by the user, and the application program presents pages and information to the user and handles input and output.
As shown in Fig. 1, the data processing method in this embodiment includes steps S101 to S103.
S101: Upon receiving a lock-acquisition request from a thread to be locked, exchange lock data and thread data of the thread to be locked through the shared cache of the processor, and determine according to the lock data whether the lock is occupied.
In this embodiment, after a lock-acquisition request for the lock is received from any thread (call it thread C; thread C may be called the "thread to be locked", and the request a "lock-acquisition request"), thread C's thread data and the lock data need to be exchanged. In particular, the exchange is performed through a shared cache (which may be the shared cache of the processor); that is, the lock's data (which may be set to 16 bytes or more) is stored in the shared cache. The shared cache may be the last level cache (LLC), in which case the execution subject may be the control unit of the last level cache.
In this embodiment, thread C carries corresponding data, including but not limited to thread C's private data address and private data (i.e., the content of the private data). The private data may be thread C's pending value (or flag-bit value); the pending value may default to 0 initially and may change. Both the private data address and the private data may be 8 bytes or more. In addition, an initialization structure struct mcs_info{long pending; /*8 bytes*/} may be set for thread C.
The thread data of thread C used to exchange with the lock data (hereinafter called "thread interaction data") includes but is not limited to the above private data address and private data; that is, the private data address and private data may be packed into thread interaction data, for example using value = &info.pending << 64 | info.pending, which this embodiment does not limit. The thread interaction data and the lock data used to exchange with it (hereinafter called "lock interaction data") have the same number of bytes. The lock interaction data may initially be empty.
Exchanging the lock data and the thread data of the thread to be locked through the shared cache of the processor includes: on the one hand, reading the lock interaction data from the shared cache and returning it to the pipeline corresponding to thread C (i.e., the pipeline of the processor on which thread C runs); on the other hand, writing thread C's thread interaction data into the lock's cache line in the shared cache, i.e., overwriting the lock interaction data in the shared cache with thread C's thread interaction data. Specifically, old = XCHG_OPT(&lock, &value) may be used for the exchange (old being the lock interaction data), which this embodiment does not limit. The thread interaction data belongs to the thread data, the lock interaction data belongs to the lock data, and the exchange between thread interaction data and lock interaction data constitutes the exchange between the thread data and the lock data.
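As one concrete reading of the S101 exchange, the sketch below packs a thread's private data address and pending value into a 16-byte record and models XCHG_OPT as a plain read-then-overwrite. In the embodiment the hardware would perform this as a single atomic exchange at the shared cache; all names here (xdata, xchg_opt, lock_is_occupied) are ours for illustration, not from this specification.

```c
#include <stdint.h>

/* Thread/lock interaction data: two 8-byte fields, matching the
 * 16-byte lock data described above. */
struct xdata {
    uint64_t pending_addr;   /* &info.pending: the private data address */
    uint64_t pending;        /* info.pending:  the private data (value) */
};

/* Exchange with the lock word held in the shared cache: the old lock
 * interaction data is returned to the requester's pipeline, and the
 * requester's thread interaction data overwrites the lock's cache line.
 * (Modeled non-atomically; XCHG_OPT would do this in one operation.) */
struct xdata xchg_opt(struct xdata *lock_word, struct xdata mine) {
    struct xdata old = *lock_word;
    *lock_word = mine;
    return old;
}

/* Empty lock interaction data ({0, 0}) means the lock is unoccupied. */
int lock_is_occupied(struct xdata old) {
    return old.pending_addr != 0 || old.pending != 0;
}
```

A requester thus learns in one step whether the lock was free and, if not, obtains the previous requester's private data address from the returned record.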
In this embodiment, whether the lock is occupied can be determined from the lock interaction data. Specifically: if the lock interaction data is empty (for example, the lock data is still in its initialized state and has never been changed, or was changed but has returned to the empty state), the lock is not occupied; if the lock interaction data is not empty (indicating that before the lock-acquisition request from the thread to be locked was received, some thread had already exchanged data with the lock through the shared cache, so that the lock interaction data seen by the thread to be locked is the thread interaction data of the previous thread that exchanged data with the lock), the lock is occupied.
S103: If the lock is not occupied, enable the thread to be locked to obtain the lock; if the lock is occupied, enable the thread to be locked to obtain the lock when target thread data of the lock-occupying thread meets a preset condition.
If the lock is not occupied when thread C's lock-acquisition request is received, thread C obtains the lock; if the lock is already occupied, thread C obtains the lock when the target thread data of the lock-occupying thread (i.e., the thread occupying the lock) meets a preset condition, as shown in Fig. 3. For any thread, its target thread data belongs to its thread data used to exchange with the lock, i.e., to its thread interaction data. Specifically, the target thread data may be the thread's pending value (private data), i.e., info.pending. In this embodiment, the preset condition is met when the target thread data of the lock-occupying thread is greater than the lock interaction data obtained by the thread to be locked. Specifically, the lock interaction data obtained by the thread to be locked includes the lock-occupying thread's pending value; if the lock-occupying thread's pending value (its target thread data) is greater than the lock-occupying thread's pending value contained in the lock interaction data obtained by the thread to be locked (i.e., the part of that lock interaction data corresponding to the lock-occupying thread's target thread data), then the target thread data of the lock-occupying thread can be regarded as greater than the lock interaction data obtained by the thread to be locked, and thus greater than the lock data (the lock interaction data belongs to the lock data).
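Under this reading, the preset-condition test of S103 reduces to a single comparison. The function below is our illustrative rendering of it, assuming the waiter holds the lock-occupying thread's private data address and the pending value it observed during the exchange:

```c
#include <stdint.h>

/* Preset condition of S103, as described above: the lock is handed over
 * once the lock-occupying thread's live pending value exceeds the
 * pending value the waiter captured during the exchange. The function
 * name is ours, not from this specification. */
int preset_condition_met(const uint64_t *holder_pending_addr,
                         uint64_t observed_pending) {
    return *holder_pending_addr > observed_pending;
}
```

Because the comparison reads only the holder's private data through an address the waiter already has, the holder signals hand-over simply by changing its own value, with no operation on the waiter's data.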
After thread C obtains the lock, it can enter the critical section (or critical region) to perform corresponding operations, such as modifying a global variable, which this embodiment does not limit.
Whenever any thread wants to obtain the lock, it can perform the above process and exchange data with the lock through the shared cache; hence the shared cache in this embodiment can be shared by any number of threads. For different threads to be locked, the lock data need not be migrated within the shared cache.
As shown in Fig. 4, the second embodiment of this specification provides a data processing method, including steps S105 to S107.
S105: Upon receiving an unlock request from a lock-occupying thread, determine whether the thread data that the lock-occupying thread wrote into the shared cache of the processor, when exchanging data with the lock through the shared cache before occupying the lock, has been changed.
After a thread obtains the lock and finishes the corresponding operations, an unlock request from that thread is received. Suppose thread D is the lock-occupying thread; in going from thread-to-be-locked to lock-occupying thread, thread D also executed the aforementioned S101 and S103, so thread D likewise exchanged data with the lock in the shared cache. Upon receiving thread D's unlock request, it is determined whether the thread interaction data of thread D written into the shared cache, when thread D exchanged data with the lock before occupying it, has been changed (thread D's thread interaction data written into the shared cache is also the lock interaction data to be exchanged with the next thread to be locked after thread D).
S107: If the thread data written into the shared cache has been changed, change the thread data of the lock-occupying thread so that the lock-occupying thread is unlocked.
Upon receiving thread D's unlock request, if thread D's thread interaction data written into the shared cache has been changed, it means that after thread D, a lock-acquisition request from another thread (call it thread E) was received, and thread E's thread interaction data was exchanged with the lock interaction data, so that the lock interaction data in the shared cache (i.e., thread D's thread interaction data) was changed into thread E's thread interaction data (two different threads have different private data addresses, so their thread interaction data differ). In this case, thread D's thread data is changed, specifically its thread interaction data, and more specifically its target thread data (as above), so that thread D, as the lock owner, is unlocked and leaves the critical section, as shown in Fig. 5. In this embodiment, changing a thread's thread interaction data may include adding 1 to its pending value, which can be expressed as: private data address -> private data = private data address -> private data + 1.
Upon receiving thread D's unlock request, if thread D's thread interaction data written into the shared cache has not been changed, it means that between thread D's exchange with the lock data and the receipt of thread D's unlock request, no other thread exchanged data with the lock, i.e., no other thread applied for the lock. In this case, thread D's thread interaction data in the shared cache is changed so that the lock-occupying thread is unlocked; this may include writing thread D's thread interaction data in the shared cache as empty (clearing it). When the next thread wants to acquire the lock, it will find the lock interaction data empty and can therefore obtain the lock, as shown in Fig. 5.
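The two unlock branches of S105/S107 (changed vs. unchanged lock-line data) can be sketched together as follows. The names are ours, and in the embodiment the check-and-clear would be performed as a single operation at the shared cache rather than as separate reads and writes:

```c
#include <stdint.h>
#include <stddef.h>

/* Thread/lock interaction data, as before: address + pending value. */
struct ixd {
    uint64_t *pending_addr;   /* private data address */
    uint64_t  pending;        /* private data (pending value) */
};

static int same_ixd(struct ixd a, struct ixd b) {
    return a.pending_addr == b.pending_addr && a.pending == b.pending;
}

/* Unlock sketch. `written` is the interaction data this thread wrote
 * into the lock's cache line when it acquired the lock. */
void unlock_holder(struct ixd *lock_word, struct ixd written) {
    if (same_ixd(*lock_word, written)) {
        /* Unchanged: nobody asked for the lock since we took it.
         * Clear the lock word so the next requester sees "empty". */
        lock_word->pending_addr = NULL;
        lock_word->pending = 0;
    } else {
        /* Changed: a later requester overwrote it with its own data.
         * Bump our own pending value; the waiter, which holds our
         * private data address, observes the change and takes over. */
        *written.pending_addr += 1;
    }
}
```

Note that in the contended branch the holder touches only its own private data; it never needs the waiter's address, which is the decoupling the embodiment claims over the MCS scheme.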
Whenever any thread wants to unlock, it can perform the above process and exchange data with the lock through the shared cache; hence the shared cache in this embodiment can be shared by any number of threads. For different lock-occupying threads, the lock data need not be migrated within the shared cache.
The first and second embodiments are further described below, taking thread X and thread Y as examples.
Assume that thread Y's lock-acquisition request is received earlier than thread X's. When thread X's lock-acquisition request is received, thread X's thread interaction data is exchanged with the lock interaction data. Then:
(1) If thread X's lock-acquisition request is received later than thread Y's unlock request, then when thread Y's unlock request is received, thread X has not yet exchanged data with the lock, thread Y's thread interaction data written into the shared cache has not been changed, and that data is therefore cleared. Thus, when thread X's lock-acquisition request is received and thread X's thread interaction data is exchanged with the lock interaction data, the lock interaction data read from the shared cache is empty, indicating the lock is not occupied, and thread X obtains the lock.
(2) If thread X's lock-acquisition request is received earlier than thread Y's unlock request, then when thread X's request is received, the lock is at least occupied by thread Y (thread Y being the lock-occupying thread). Since thread Y also performs the process of this embodiment, thread Y's lock-acquisition request was received at or before the time thread X's was received, and thread Y's thread interaction data was exchanged with the lock interaction data, so the lock interaction data in the shared cache to be exchanged with thread X has already been overwritten with thread Y's thread interaction data (echoing the rule above: if the lock interaction data is not empty, the lock is occupied).
When thread X's lock-acquisition request is received, thread X's thread interaction data is exchanged with the lock interaction data through the shared cache: the lock interaction data (i.e., thread Y's thread interaction data) is read from the shared cache and placed into the pipeline corresponding to thread X, so that thread X obtains thread Y's thread interaction data, including thread Y's private data address and private data; meanwhile, the lock interaction data in the shared cache to be exchanged with subsequent threads is changed into thread X's thread interaction data.
Thread X can then wait for thread Y's thread interaction data to be changed (because thread X's lock-acquisition request was received earlier than thread Y's unlock request, when thread Y's unlock request is received it will be found that thread Y's data written into the shared cache has been changed into thread X's thread interaction data, so thread Y's thread data, specifically its thread interaction data, will be changed). In particular, thread X may wait using while(thread Y's private data address -> thread Y's private data <= thread Y's private data obtained by thread X); that is, as long as thread Y's private data is less than or equal to the copy of thread Y's private data that thread X obtained, thread X waits. Since thread Y's private data address was read from the shared cache, thread X or the execution subject can (continuously or periodically) fetch thread Y's private data through that address and compare it against the copy thread X obtained.
Since receiving thread Y's unlock request changes thread Y's thread interaction data, namely adding 1 to thread Y's pending value, the target thread data of the lock-occupying thread becomes greater than the lock interaction data obtained by thread X, i.e., thread Y's private data becomes greater than the copy of thread Y's private data obtained by thread X. Thus, on the one hand thread Y is unlocked, and on the other hand the condition while(thread Y's private data address -> thread Y's private data <= thread Y's private data obtained by thread X) no longer holds, so thread X obtains the lock.
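Putting the pieces together, the thread-X / thread-Y hand-over above can be walked through step by step. The sketch below runs the scenario single-threadedly for clarity; in the embodiment the exchange would be one atomic operation at the shared cache and the two threads would run concurrently. All names are illustrative, not from this specification.

```c
#include <stdint.h>
#include <stddef.h>

/* Thread/lock interaction data: private data address + pending value. */
struct xd {
    uint64_t *addr;
    uint64_t  pending;
};

static struct xd lockw = { NULL, 0 };   /* lock word in the shared cache, empty */

/* S101 exchange step (modeled non-atomically). */
static struct xd lock_exchange(struct xd mine) {
    struct xd old = lockw;
    lockw = mine;
    return old;
}

/* S105/S107 unlock step. `mine` is what this thread wrote on acquire. */
static void lock_release(struct xd mine) {
    if (lockw.addr == mine.addr && lockw.pending == mine.pending) {
        lockw.addr = NULL;              /* no waiter: clear the lock word */
        lockw.pending = 0;
    } else {
        *mine.addr += 1;                /* waiter present: bump our pending */
    }
}

/* Waiter's hand-over test: holder's live pending exceeds the observed copy. */
static int handed_over(struct xd observed) {
    return *observed.addr > observed.pending;
}
```

The test below replays the scenario: Y acquires an empty lock, X exchanges and sees Y's data, Y unlocks (bumping its own pending because X overwrote the lock word), X's wait condition clears, and X's later unlock empties the lock word again.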
The above threads A, B, C, D, E, X, and Y do not refer to particular threads; each may refer to any thread.
上述实施例可以单独或结合或组合使用。The above-mentioned embodiments can be used alone or in combination or in combination.
The above embodiments disclose a new data processing method that can be used for lock mechanism control. During lock mechanism control, all data exchange between a thread and the lock is carried out through the shared cache (thread interaction data is written into the shared cache and lock interaction data is read from the shared cache, so the data exchange between the thread and the lock effectively takes place entirely in the shared cache). The lock data does not need to migrate between processor cores, which avoids redundant and complex migration of lock data; the data interaction between threads and the lock therefore takes less time and is more efficient, and thread processing efficiency is also higher. In the above embodiments, to unlock the lock-holding thread or to enable another thread to acquire the lock, either the thread interaction data of the lock-holding thread or the thread interaction data in the shared cache can be changed (the thread interaction data in the shared cache is in effect the lock interaction data). The thread waiting to acquire the lock, or the execution body, can obtain the private data address of the lock-holding thread through the data interaction between the lock-holding thread and the lock, and can thereby observe changes in the thread interaction data (private data) of the lock-holding thread. In this way, when the lock-holding thread unlocks, it does not need to monitor or take into account the data changes of other threads, nor does it need to operate on the data of other threads. It can be seen that the above embodiments can speed up the execution of critical sections and reduce thread time consumption; they can simplify the lock control process and improve lock control efficiency and thread processing efficiency; and they can reduce the occurrence of conflicts while avoiding rollbacks when conflicts do occur.
As shown in FIG. 6, the third embodiment of this specification provides a data processing method; for the execution body of this embodiment, reference may be made to the first embodiment. The data processing method of this embodiment includes S201 to S207.
S201: When a lock acquisition request of a first thread is received, determine whether the lock is occupied according to the data of the lock corresponding to first data of the first thread, and write the first data into the cache line of the lock in the shared cache of the processor.
In this embodiment, the first thread may correspond to thread C in the first embodiment, the first data may correspond to the thread interaction data in the first embodiment, and the data of the lock "corresponding to the first data of the first thread" may correspond to the lock interaction data in the first embodiment. Determining whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread (that is, reading that data of the lock) may correspond to determining whether the lock is occupied according to the lock interaction data in the first embodiment.
S203: If the lock is not occupied, make the first thread acquire the lock; if the lock is occupied, make the first thread acquire the lock when the target thread data of the lock-holding thread meets a preset condition.
For details, refer to the first embodiment.
The "target thread data" in this embodiment is the same as the "target thread data" in the first embodiment.
S205: When an unlock request of the first thread is received, determine whether the first data exists in the shared cache.
Refer to the first embodiment.
S207: If the first data does not exist in the shared cache, change the first data of the first thread to unlock the first thread; if the first data exists in the shared cache, change the first data in the shared cache to unlock the first thread.
Refer to the first embodiment.
In this embodiment, the second thread is a thread that wants to acquire the lock after the first thread, that is, the lock acquisition request of the second thread is received later than the lock acquisition request of the first thread. When the lock acquisition request of the second thread is received, whether the lock is occupied is determined according to the data of the lock "corresponding to second data of the second thread", and the second data is written into the cache line of the lock in the shared cache; if the lock is not occupied, the second thread acquires the lock; if the lock is occupied, the second thread acquires the lock when the first data meets the preset condition. The second data corresponds to the thread interaction data of the second thread.
In this embodiment, when an unlock request of the second thread is received, it is determined whether the second data exists in the cache line; if not, the second data of the second thread is changed to unlock the second thread. And/or, when the unlock request of the second thread is received, it is determined whether the second data exists in the cache line; if so, the second data in the shared cache is changed to unlock the second thread.
In this embodiment, if the lock acquisition request of the second thread is received earlier than the unlock request of the first thread, the data of the lock corresponding to the second data is the first data; and/or, if the lock acquisition request of the second thread is received later than the unlock request of the first thread, the data of the lock corresponding to the second data is the data obtained after the first data in the shared cache has been changed.
In this embodiment, determining whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread includes: if the data of the lock corresponding to the first data of the first thread is empty, the lock is not occupied; if that data is not empty, the lock is occupied.
In this embodiment, for any data (without limitation), if the data is greater than the lock data corresponding to it, the data meets the preset condition. Thus, when the target thread data of the lock-holding thread is greater than its corresponding lock data, the target thread data of the lock-holding thread meets the preset condition; and when the first data is greater than its corresponding lock data, the first data meets the preset condition.
In this embodiment, the target thread data belongs to the data that the lock-holding thread writes into the shared cache.
In this embodiment, the shared cache is the last-level cache.
In this embodiment, the first data includes the private data address and private data of the first thread.
In this embodiment, the first data is the pending value of the first thread.
In this embodiment, changing the first data of the first thread includes: adding 1 to the pending value of the first thread.
In this embodiment, the second data is the pending value of the second thread.
In this embodiment, changing the second data of the second thread includes: adding 1 to the pending value of the second thread.
In this embodiment, changing the first data in the shared cache includes: writing the first data in the cache line as empty.
In this embodiment, changing the second data in the shared cache includes: writing the second data in the cache line as empty.
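The flow of S201 through S207 can be sketched as a single-threaded C model. Everything in this sketch is an illustrative assumption: the lock's cache line is modeled as an ordinary struct (`lock_line_t`), the shared cache and its coherence behavior are not modeled, and the helper names are invented for this sketch rather than taken from any real implementation.

```c
#include <stdbool.h>
#include <stddef.h>

/* Per-thread private data: the thread's "pending value". */
typedef struct {
    long pending;
} thread_t;

/* Model of the lock's cache line in the shared cache. */
typedef struct {
    thread_t *owner;           /* private data address written by a requester */
    long      owner_pending;   /* private data written by that requester      */
    bool      empty;           /* empty lock data means the lock is unoccupied */
} lock_line_t;

/* S201/S203: exchange data with the lock through the shared cache.
 * Returns true if the lock was free and is now held by t; otherwise
 * *prev receives the previous occupant's data so the caller can wait
 * until that occupant's data meets the preset condition. */
bool try_acquire(lock_line_t *line, thread_t *t, lock_line_t *prev)
{
    bool was_free = line->empty;      /* empty => not occupied */
    *prev = *line;                    /* lock data read from the shared cache */
    line->owner = t;                  /* first data written into the  */
    line->owner_pending = t->pending; /* lock's cache line            */
    line->empty = false;
    return was_free;
}

/* S205/S207: on unlock, check whether this thread's data still exists
 * in the cache line; if it does, change the shared-cache data (write it
 * as empty); if a later requester has overwritten it, change the
 * thread's own data (pending value plus 1) instead. */
void release(lock_line_t *line, thread_t *t)
{
    if (line->owner == t) {           /* first data still in the shared cache */
        line->empty = true;           /* write it as empty */
        line->owner = NULL;
    } else {
        t->pending += 1;              /* first data was replaced: change it,
                                       * satisfying the waiter's condition */
    }
}
```

A later requester that found the lock occupied would then wait until the previous occupant's pending value exceeds the snapshot it took in `prev`, matching the preset condition described above.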
For content not described in detail in this embodiment, reference may be made to the first embodiment.
In this embodiment, the first thread and the second thread do not refer to specific threads; each can refer to any thread.
This embodiment discloses a new data processing method that can be used for lock mechanism control. During lock mechanism control, both lock data reading and thread data writing are performed through the shared cache (the first data or the second data is written into the shared cache, and the corresponding lock data is read from the shared cache, so data reading and writing effectively take place entirely in the shared cache), which avoids redundant and complex migration of lock data. In this embodiment, to unlock the lock-holding thread or to enable another thread to acquire the lock, either the data of the lock-holding thread (for example, the first data or the second data) or the thread data in the shared cache (for example, the first data or the second data, which has in effect become the data the lock uses to interact with threads) can be changed. The thread waiting to acquire the lock, or the execution body, can obtain the private data address of the lock-holding thread through the data interaction between the lock-holding thread and the lock, and can thereby observe changes in the thread data (private data) of the lock-holding thread. In this way, when the lock-holding thread unlocks, it does not need to monitor or take into account the data changes of other threads, nor does it need to operate on the data of other threads. It can be seen that this embodiment can speed up the execution of critical sections and reduce thread time consumption; it can simplify the lock control process and improve lock control efficiency; and it can reduce the occurrence of conflicts while avoiding rollbacks when conflicts do occur.
As shown in FIG. 7, the fourth embodiment of this specification provides a data processing apparatus, including a locking module 301 and an unlocking module 303.
The locking module 301 (or first locking module 301) is configured to: when a lock acquisition request of a thread waiting to acquire the lock is received, exchange lock data and the thread data of the thread waiting to acquire the lock through the shared cache of the processor, and determine whether the lock is occupied according to the lock data; and, if the lock is not occupied, make the thread waiting to acquire the lock acquire the lock; if the lock is occupied, make the thread waiting to acquire the lock acquire the lock when the target thread data of the lock-holding thread meets a preset condition.
The unlocking module 303 (or first unlocking module 303) is configured to: when an unlock request of the lock-holding thread is received, determine whether the thread data that was written into the shared cache when the lock-holding thread exchanged data with the lock through the shared cache of the processor before holding the lock has been changed; and, if the thread data written into the shared cache during that exchange has been changed, change the thread data of the lock-holding thread to unlock the lock-holding thread.
Optionally, the unlocking module 303 is further configured to: when the unlock request of the lock-holding thread is received, if the thread data that was written into the shared cache when the lock-holding thread exchanged data with the lock through the shared cache before holding the lock has not been changed, change the thread data in the shared cache to unlock the lock-holding thread.
Optionally, exchanging the lock data and the thread data of the thread waiting to acquire the lock through the shared cache of the processor includes: reading the lock data used for the exchange from the shared cache and placing it into the pipeline corresponding to the thread; and writing the thread data used for the exchange into the cache line of the lock in the shared cache.
Optionally, determining whether the lock is occupied according to the lock data includes: if the lock data is empty, the lock is not occupied; if the lock data is not empty, the lock is occupied.
Optionally, when the target thread data of the lock-holding thread is greater than the lock data, the target thread data meets the preset condition.
Optionally, the target thread data belongs to the thread data used by the lock-holding thread for data exchange with the lock.
Optionally, the shared cache is the last-level cache.
Optionally, the thread data includes the private data address and private data of the thread.
Optionally, the private data is the pending value of the thread.
Optionally, changing the thread data of the thread includes: adding 1 to the pending value of the thread.
Optionally, changing the thread data in the shared cache includes: writing the thread data that was written into the shared cache during the exchange as empty.
As shown in FIG. 8, the fifth embodiment of this specification provides a data processing apparatus, including a locking module 401 and an unlocking module 403.
The locking module 401 (or second locking module 401) is configured to: when the lock acquisition request of the first thread is received, determine whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread, and write the first data into the cache line of the lock in the shared cache of the processor; and, if the lock is not occupied, make the first thread acquire the lock; if the lock is occupied, make the first thread acquire the lock when the target thread data of the lock-holding thread meets a preset condition.
The unlocking module 403 (or second unlocking module 403) is configured to: when the unlock request of the first thread is received, determine whether the first data exists in the shared cache; and, if the first data does not exist, change the first data of the first thread to unlock the first thread; and, if the first data exists, change the first data in the shared cache to unlock the first thread.
Optionally, the locking module 401 is further configured to: when the lock acquisition request of the second thread is received, determine whether the lock is occupied according to the data of the lock corresponding to the second data of the second thread, and write the second data into the cache line of the lock in the shared cache; if the lock is not occupied, make the second thread acquire the lock; if the lock is occupied, make the second thread acquire the lock when the first data meets the preset condition.
Optionally, the unlocking module 403 is further configured to: when the unlock request of the second thread is received, determine whether the second data exists in the cache line; if not, change the second data of the second thread to unlock the second thread; and/or, when the unlock request of the second thread is received, determine whether the second data exists in the cache line; if so, change the second data in the shared cache to unlock the second thread.
Optionally, if the lock acquisition request of the second thread is received earlier than the unlock request of the first thread, the data of the lock corresponding to the second data is the first data; and/or, if the lock acquisition request of the second thread is received later than the unlock request of the first thread, the data of the lock corresponding to the second data is the data obtained after the first data in the shared cache has been changed.
Optionally, determining whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread includes: if the data of the lock corresponding to the first data of the first thread is empty, the lock is not occupied; if that data is not empty, the lock is occupied.
Optionally, for any data (without limitation), if the data is greater than the lock data corresponding to it, the data meets the preset condition. Thus, when the target thread data of the lock-holding thread is greater than its corresponding lock data, the target thread data of the lock-holding thread meets the preset condition; and when the first data is greater than its corresponding lock data, the first data meets the preset condition.
Optionally, the target thread data belongs to the data that the lock-holding thread writes into the shared cache.
Optionally, the shared cache is the last-level cache.
Optionally, the first data includes the private data address and private data of the first thread.
Optionally, the first data is the pending value of the first thread.
Optionally, changing the first data of the first thread includes: adding 1 to the pending value of the first thread.
Optionally, the second data is the pending value of the second thread.
Optionally, changing the second data of the second thread includes: adding 1 to the pending value of the second thread.
Optionally, changing the first data in the shared cache includes: writing the first data in the cache line as empty.
Optionally, changing the second data in the shared cache includes: writing the second data in the cache line as empty.
The sixth embodiment of this specification provides a data processing device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to: when a lock acquisition request of a thread waiting to acquire the lock is received, exchange lock data and the thread data of the thread waiting to acquire the lock through the shared cache of the processor, and determine whether the lock is occupied according to the lock data; if not, make the thread waiting to acquire the lock acquire the lock; if so, make the thread waiting to acquire the lock acquire the lock when the target thread data of the lock-holding thread meets a preset condition; when an unlock request of the lock-holding thread is received, determine whether the thread data that was written into the shared cache when the lock-holding thread exchanged data with the lock through the shared cache of the processor before holding the lock has been changed; and, if it has been changed, change the thread data of the lock-holding thread to unlock the lock-holding thread.
The seventh embodiment of this specification provides a data processing device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to: when the lock acquisition request of the first thread is received, determine whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread, and write the first data into the cache line of the lock in the shared cache of the processor; if not, make the first thread acquire the lock; if so, make the first thread acquire the lock when the target thread data of the lock-holding thread meets a preset condition; when the unlock request of the first thread is received, determine whether the first data exists in the shared cache; if it does not exist, change the first data of the first thread to unlock the first thread; and, if it exists, change the first data in the shared cache to unlock the first thread.
The eighth embodiment of this specification provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the following steps: when a lock acquisition request of a thread waiting to acquire the lock is received, exchanging lock data and the thread data of the thread waiting to acquire the lock through the shared cache of the processor, and determining whether the lock is occupied according to the lock data; if not, making the thread waiting to acquire the lock acquire the lock; if so, making the thread waiting to acquire the lock acquire the lock when the target thread data of the lock-holding thread meets a preset condition; when an unlock request of the lock-holding thread is received, determining whether the thread data that was written into the shared cache when the lock-holding thread exchanged data with the lock through the shared cache of the processor before holding the lock has been changed; and, if it has been changed, changing the thread data of the lock-holding thread to unlock the lock-holding thread.
The ninth embodiment of this specification provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the following steps: when the lock acquisition request of the first thread is received, determining whether the lock is occupied according to the data of the lock corresponding to the first data of the first thread, and writing the first data into the cache line of the lock in the shared cache of the processor; if not, making the first thread acquire the lock; if so, making the first thread acquire the lock when the target thread data of the lock-holding thread meets a preset condition; when the unlock request of the first thread is received, determining whether the first data exists in the shared cache; if it does not exist, changing the first data of the first thread to unlock the first thread; and, if it exists, changing the first data in the shared cache to unlock the first thread.
The above embodiments can be used in combination with one another.
Specific embodiments of this specification have been described above; other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing are also possible or may be advantageous.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus, device, and non-volatile computer-readable storage medium embodiments are basically similar to the method embodiments, their descriptions are relatively simple; for relevant parts, refer to the corresponding descriptions of the method embodiments.
The apparatus, device, and non-volatile computer-readable storage medium provided in the embodiments of this specification correspond to the method; therefore, the apparatus, device, and non-volatile computer storage medium also have beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been described in detail above, the beneficial technical effects of the corresponding apparatus, device, and non-volatile computer storage medium are not repeated here.
In the 1990s, an improvement to a technology could be clearly distinguished as either a hardware improvement (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or a software improvement (an improvement to a method flow). However, with the development of technology, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD) (for example, a Field Programmable Gate Array (FPGA)) is such an integrated circuit whose logic functions are determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a single PLD, without requiring a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, this programming is mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the original code to be compiled must also be written in a specific programming language, called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most commonly used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained simply by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
控制器可以按任何适当的方式实现,例如,控制器可以采取例如微处理器或处理器以及存储可由该(微)处理器执行的计算机可读程序代码(例如软件或固件)的计算机可读介质、逻辑门、开关、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑控制器和嵌入微控制器的形式,控制器的例子包括但不限于以下微控制器:ARC 625D、Atmel AT91SAM、MicrochIP地址PIC18F26K20以及Silicone Labs C8051F320,存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,完全可以通过将方法步 骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。The controller can be implemented in any suitable manner. For example, the controller can take the form of, for example, a microprocessor or a processor and a computer-readable medium storing computer-readable program codes (such as software or firmware) executable by the (micro)processor. , Logic gates, switches, application specific integrated circuits (ASICs), programmable logic controllers and embedded microcontrollers. Examples of controllers include but are not limited to the following microcontrollers: ARC625D, Atmel AT91SAM, MicrochIP addresses PIC18F26K20 and Silicon Labs C8051F320, the memory controller can also be implemented as a part of the memory control logic. Those skilled in the art also know that, in addition to implementing the controller in a purely computer-readable program code manner, it is entirely possible to program the method steps to make the controller use logic gates, switches, application-specific integrated circuits, programmable logic controllers, and embedded logic. The same function can be realized in the form of a microcontroller, etc. Therefore, such a controller can be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as a structure within the hardware component. Or even, the device for realizing various functions can be regarded as both a software module for realizing the method and a structure within a hardware component.
The systems, devices, modules, or units set forth in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, laptop computer, cellular phone, camera phone, smartphone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or any combination of these devices.
For convenience of description, the above apparatus is described in terms of functions divided into various units. Of course, when implementing this specification, the functions of the units may be realized in one or more pieces of software and/or hardware.
Those skilled in the art should understand that the embodiments of this specification may be provided as a method, a system, or a computer program product. Accordingly, the embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
This specification is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to its embodiments. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include computer-readable media in the form of non-persistent storage, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Absent further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes it.
This specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. This specification can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in both local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is basically similar to the method embodiment, its description is relatively brief, and the relevant parts of the method embodiment description may be consulted.
The above descriptions are merely embodiments of this specification and are not intended to limit this application. Various modifications and changes to this application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this application shall fall within the scope of its claims.
Claims (32)
- A data processing method, comprising:
  when a lock request of a thread waiting to acquire a lock is received, exchanging lock data and the thread data of the waiting thread through a shared cache of a processor, and determining from the lock data whether the lock is occupied;
  if not, causing the waiting thread to acquire the lock;
  if so, causing the waiting thread to acquire the lock when target thread data of the lock-holding thread meets a preset condition;
  and/or,
  when an unlock request of a lock-holding thread is received, determining whether the thread data written into the shared cache, when the lock-holding thread exchanged data with the lock through the processor's shared cache before holding the lock, has been changed;
  if it has been changed, changing the thread data of the lock-holding thread so that the lock-holding thread is unlocked.
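Claim 1, read together with claims 5 and 9-10, describes a hand-off in the spirit of an MCS/ticket lock: an acquirer exchanges its thread data (including a pending value) with the lock's cache line, and a holder releases its successor by making its own pending value grow past the value the successor captured at exchange time. The sketch below is a minimal Python model of that protocol, not the patented hardware mechanism: a guard `threading.Lock` stands in for the atomic exchange on the shared-cache line, and all class, attribute, and function names are illustrative assumptions.

```python
import threading
import time

class PendingLock:
    """Software model of the claimed hand-off. `_lock_data` plays the role of
    the lock's cache line; `_swap` stands in for the atomic exchange."""

    def __init__(self):
        self._swap = threading.Lock()
        self._lock_data = None          # empty lock data means the lock is free

    def acquire(self, rec):
        with self._swap:                # exchange: read old lock data, write ours
            pred = self._lock_data
            self._lock_data = rec
            ticket = pred.pending if pred else None
        if pred is None:
            return                      # lock data was empty: lock acquired
        while pred.pending <= ticket:   # wait until the predecessor's pending
            time.sleep(0)               # value exceeds the captured one

    def release(self, rec):
        with self._swap:
            if self._lock_data is rec:  # our data is still in the "cache line":
                self._lock_data = None  # no successor arrived, just clear it
            else:
                rec.pending += 1        # a successor swapped in: hand off

class ThreadRec:
    """Per-thread private data; `pending` models the claimed pending value."""
    def __init__(self):
        self.pending = 0

counter = 0

def worker(lock, iters):
    global counter
    rec = ThreadRec()
    for _ in range(iters):
        lock.acquire(rec)
        counter += 1                    # critical section protected by the lock
        lock.release(rec)

lock = PendingLock()
threads = [threading.Thread(target=worker, args=(lock, 1000)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                          # expected: 4000
```

Capturing the predecessor's pending value and incrementing it are both done under the guard lock so that a capture and the increment it must observe are strictly ordered; with a real atomic exchange on the cache line, the analogous ordering would be expected to come from the cache-coherence protocol.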
- The method according to claim 1, further comprising:
  when an unlock request of a lock-holding thread is received, if the thread data written into the shared cache, when the lock-holding thread exchanged data with the lock through the shared cache before holding the lock, has not been changed, changing that thread data in the shared cache so that the lock-holding thread is unlocked.
- The method according to claim 1, wherein exchanging the lock data and the thread data of the thread waiting to acquire the lock through the shared cache of the processor comprises:
  reading the lock data used for the exchange from the shared cache and placing it into the pipeline corresponding to the thread; and
  writing the thread data used for the exchange into the lock's cache line in the shared cache.
- The method according to claim 1, wherein determining from the lock data whether the lock is occupied comprises:
  if the lock data is empty, the lock is not occupied;
  if the lock data is not empty, the lock is occupied.
- The method according to claim 1, wherein the target thread data meets the preset condition when the target thread data of the lock-holding thread is greater than the lock data.
- The method according to any one of claims 1 to 5, wherein the target thread data belongs to the thread data used by the lock-holding thread for data exchange with the lock.
- The method according to any one of claims 1 to 5, wherein the shared cache is a last-level cache.
- The method according to any one of claims 1 to 5, wherein the thread data includes a private data address and private data of the thread.
- The method according to claim 8, wherein the private data is a pending value of the thread.
- The method according to claim 9, wherein changing the thread data of the thread comprises:
  adding 1 to the pending value of the thread.
- The method according to claim 2, wherein changing the thread data in the shared cache comprises:
  writing the thread data that was written into the shared cache during the exchange as empty.
- A data processing method, comprising:
  when a lock request of a first thread is received, determining whether the lock is occupied according to the lock's data corresponding to first data of the first thread, and writing the first data into the lock's cache line in a shared cache of a processor;
  if not, causing the first thread to acquire the lock;
  if so, causing the first thread to acquire the lock when target thread data of the lock-holding thread meets a preset condition;
  when an unlock request of the first thread is received, determining whether the first data exists in the shared cache;
  if it does not exist, changing the first data of the first thread so that the first thread is unlocked;
  if it exists, changing the first data in the shared cache so that the first thread is unlocked.
- The method according to claim 12, further comprising:
  when a lock request of a second thread is received, determining whether the lock is occupied according to the lock's data corresponding to second data of the second thread, and writing the second data into the lock's cache line in the shared cache;
  if not, causing the second thread to acquire the lock;
  if so, causing the second thread to acquire the lock when the first data meets the preset condition.
- The method according to claim 13, further comprising:
  when an unlock request of the second thread is received, determining whether the second data exists in the cache line;
  if it does not exist, changing the second data of the second thread so that the second thread is unlocked;
  and/or,
  when an unlock request of the second thread is received, determining whether the second data exists in the cache line;
  if it exists, changing the second data in the shared cache so that the second thread is unlocked.
- The method according to claim 13, wherein, if the lock request of the second thread is received earlier than the unlock request of the first thread, the lock's data corresponding to the second data is the first data;
  and/or,
  if the lock request of the second thread is received later than the unlock request of the first thread, the lock's data corresponding to the second data is the first data in the shared cache after it has been changed.
- The method according to any one of claims 12 to 15, wherein determining whether the lock is occupied according to the lock's data corresponding to the first data of the first thread comprises:
  if the lock's data corresponding to the first data of the first thread is empty, the lock is not occupied;
  if the lock's data corresponding to the first data of the first thread is not empty, the lock is occupied.
- The method according to any one of claims 12 to 15, wherein, for any piece of data, the data meets the preset condition if it is greater than the lock data corresponding to it.
- The method according to any one of claims 12 to 15, wherein the target thread data belongs to the data used by the lock-holding thread for writing into the shared cache.
- The method according to any one of claims 12 to 15, wherein the shared cache is a last-level cache.
- The method according to any one of claims 12 to 15, wherein the first data includes a private data address and private data of the first thread.
- The method according to any one of claims 12 to 15, wherein the first data is a pending value of the first thread.
- The method according to claim 21, wherein changing the first data of the first thread comprises:
  adding 1 to the pending value of the first thread.
- The method according to any one of claims 13 to 15, wherein the second data is a pending value of the second thread.
- The method according to claim 23, wherein changing the second data of the second thread comprises:
  adding 1 to the pending value of the second thread.
- The method according to any one of claims 12 to 15, wherein changing the first data in the shared cache comprises:
  writing the first data in the cache line as empty.
- The method according to claim 14, wherein changing the second data in the shared cache comprises:
  writing the second data in the cache line as empty.
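Claims 12 to 26 can be read as a state machine over the lock's cache line: a requester swaps its data in; a releaser either clears the line (its data is still there, claims 25-26) or bumps its pending value (a successor's data replaced it, claims 15, 17, and 22). The following single-threaded Python trace walks through that two-thread sequence, with the cache line modelled as one variable; all names are illustrative assumptions, not the patented implementation.

```python
# Trace of the claimed two-thread sequence. The lock's cache line is one cell;
# each thread's "data" is its pending value (claims 21 and 23).

cache_line = None                # lock data: empty means the lock is free (claim 16)
pending = {"T1": 0, "T2": 0}

def request_lock(tid):
    """Swap the thread's data into the cache line; return the old lock data."""
    global cache_line
    prev, cache_line = cache_line, (tid, pending[tid])
    return prev                  # None -> lock was free; else the holder's data

def release_lock(tid):
    """Clear the line if our data is still there (claims 25-26), else bump the
    pending value to satisfy the waiter's preset condition (claims 17, 22)."""
    global cache_line
    if cache_line == (tid, pending[tid]):
        cache_line = None        # no successor arrived: write the line empty
    else:
        pending[tid] += 1        # successor's data replaced ours: hand off

# T1 requests: the line is empty, so T1 acquires immediately (claim 12, "if not").
assert request_lock("T1") is None

# T2 requests while T1 holds: the lock data corresponding to T2's second data is
# T1's first data (claim 15), so T2 must wait for the preset condition (claim 13).
seen = request_lock("T2")
assert seen == ("T1", 0)

# T1 unlocks: its data no longer exists in the line, so T1's own pending value is
# changed (claim 12, "if it does not exist"), satisfying T2's condition (claim 17).
release_lock("T1")
assert pending["T1"] == 1 and pending["T1"] > seen[1]

# T2 unlocks with no successor: its data still exists, so the line is written
# empty (claims 25-26), leaving the lock free again.
release_lock("T2")
assert cache_line is None
```

The two release branches correspond to the "and/or" alternatives of claim 14: the same unlock request either edits the thread's private data or edits the shared cache, depending on whether a successor has already overwritten the cache line.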
- A data processing apparatus, comprising:
  a locking module configured to: when a lock request of a thread waiting to acquire a lock is received, exchange lock data and the thread data of the waiting thread through a shared cache of a processor, and determine from the lock data whether the lock is occupied; and, if the lock is not occupied, cause the waiting thread to acquire the lock; if the lock is occupied, cause the waiting thread to acquire the lock when target thread data of the lock-holding thread meets a preset condition;
  and/or,
  an unlocking module configured to: when an unlock request of a lock-holding thread is received, determine whether the thread data written into the shared cache, when the lock-holding thread exchanged data with the lock through the processor's shared cache before holding the lock, has been changed; and, if the thread data written into the shared cache during the exchange has been changed, change the thread data of the lock-holding thread so that the lock-holding thread is unlocked.
- A data processing apparatus, comprising:
  a locking module configured to: when a lock request of a first thread is received, determine whether the lock is occupied according to the lock's data corresponding to first data of the first thread, and write the first data into the lock's cache line in a shared cache of a processor; and, if the lock is not occupied, cause the first thread to acquire the lock; if the lock is occupied, cause the first thread to acquire the lock when target thread data of the lock-holding thread meets a preset condition;
  an unlocking module configured to: when an unlock request of the first thread is received, determine whether the first data exists in the shared cache; and, if the first data does not exist, change the first data of the first thread so that the first thread is unlocked; and, if the first data exists, change the first data in the shared cache so that the first thread is unlocked.
- A data processing device, comprising:
  at least one processor; and
  a memory communicatively connected to the at least one processor;
  wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to:
  when a lock request of a thread waiting to acquire a lock is received, exchange lock data and the thread data of the waiting thread through a shared cache of the processor, and determine from the lock data whether the lock is occupied;
  if not, cause the waiting thread to acquire the lock;
  if so, cause the waiting thread to acquire the lock when target thread data of the lock-holding thread meets a preset condition;
  and/or,
  when an unlock request of a lock-holding thread is received, determine whether the thread data written into the shared cache, when the lock-holding thread exchanged data with the lock through the processor's shared cache before holding the lock, has been changed;
  if it has been changed, change the thread data of the lock-holding thread so that the lock-holding thread is unlocked.
- A data processing device, comprising:
  at least one processor; and
  a memory communicatively connected to the at least one processor;
  wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to:
  when a lock request of a first thread is received, determine whether the lock is occupied according to the lock's data corresponding to first data of the first thread, and write the first data into the lock's cache line in a shared cache of the processor;
  if not, cause the first thread to acquire the lock;
  if so, cause the first thread to acquire the lock when target thread data of the lock-holding thread meets a preset condition;
  when an unlock request of the first thread is received, determine whether the first data exists in the shared cache;
  if it does not exist, change the first data of the first thread so that the first thread is unlocked;
  if it exists, change the first data in the shared cache so that the first thread is unlocked.
- A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the following steps:
  when a lock request of a thread waiting to acquire a lock is received, exchanging lock data and the thread data of the waiting thread through a shared cache of the processor, and determining from the lock data whether the lock is occupied;
  if not, causing the waiting thread to acquire the lock;
  if so, causing the waiting thread to acquire the lock when target thread data of the lock-holding thread meets a preset condition;
  and/or,
  when an unlock request of a lock-holding thread is received, determining whether the thread data written into the shared cache, when the lock-holding thread exchanged data with the lock through the processor's shared cache before holding the lock, has been changed;
  if it has been changed, changing the thread data of the lock-holding thread so that the lock-holding thread is unlocked.
- A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the following steps:
  when a lock request of a first thread is received, determining whether the lock is occupied according to the lock's data corresponding to first data of the first thread, and writing the first data into the lock's cache line in a shared cache of the processor;
  if not, causing the first thread to acquire the lock;
  if so, causing the first thread to acquire the lock when target thread data of the lock-holding thread meets a preset condition;
  when an unlock request of the first thread is received, determining whether the first data exists in the shared cache;
  if it does not exist, changing the first data of the first thread so that the first thread is unlocked;
  if it exists, changing the first data in the shared cache so that the first thread is unlocked.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911043428.3 | 2019-10-30 | ||
CN201911043428.3A CN110781016B (en) | 2019-10-30 | 2019-10-30 | Data processing method, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021082665A1 true WO2021082665A1 (en) | 2021-05-06 |
Family
ID=69387643
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/110753 WO2021082665A1 (en) | 2019-10-30 | 2020-08-24 | Data processing method, apparatus, device, and medium |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN110781016B (en) |
WO (1) | WO2021082665A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110781016B (en) * | 2019-10-30 | 2021-04-23 | 支付宝(杭州)信息技术有限公司 | Data processing method, device, equipment and medium |
CN111385294B (en) * | 2020-03-04 | 2021-04-20 | 腾讯科技(深圳)有限公司 | Data processing method, system, computer device and storage medium |
CN112346879B (en) * | 2020-11-06 | 2023-08-11 | 网易(杭州)网络有限公司 | Process management method, device, computer equipment and storage medium |
CN116860436A (en) * | 2023-06-15 | 2023-10-10 | 重庆智铸达讯通信有限公司 | Thread data processing method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104267929A (en) * | 2014-09-30 | 2015-01-07 | 香港应用科技研究院有限公司 | Computing system and method of operating lock in same |
US20180246773A1 (en) * | 2015-09-10 | 2018-08-30 | Hewlett Packard Enterprise Development Lp | Request of an mcs lock by guests |
CN110096475A (en) * | 2019-04-26 | 2019-08-06 | 西安理工大学 | A kind of many-core processor based on mixing interconnection architecture |
CN110781016A (en) * | 2019-10-30 | 2020-02-11 | 支付宝(杭州)信息技术有限公司 | Data processing method, device, equipment and medium |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8055856B2 (en) * | 2008-03-24 | 2011-11-08 | Nvidia Corporation | Lock mechanism to enable atomic updates to shared memory |
CN101403979A (en) * | 2008-10-27 | 2009-04-08 | 成都市华为赛门铁克科技有限公司 | Locking method for self-spinning lock and computer system |
US8607239B2 (en) * | 2009-12-31 | 2013-12-10 | International Business Machines Corporation | Lock mechanism to reduce waiting of threads to access a shared resource by selectively granting access to a thread before an enqueued highest priority thread |
US8850166B2 (en) * | 2010-02-18 | 2014-09-30 | International Business Machines Corporation | Load pair disjoint facility and instruction therefore |
US8458721B2 (en) * | 2011-06-02 | 2013-06-04 | Oracle International Corporation | System and method for implementing hierarchical queue-based locks using flat combining |
US9678897B2 (en) * | 2012-12-27 | 2017-06-13 | Nvidia Corporation | Approach for context switching of lock-bit protected memory |
JP6642806B2 (en) * | 2013-10-14 | 2020-02-12 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | Adaptive process for data sharing using lock invalidation and lock selection |
CN103761182A (en) * | 2013-12-26 | 2014-04-30 | 上海华为技术有限公司 | Method and device for deadlock detection |
CN104750536B (en) * | 2013-12-30 | 2018-08-21 | 华为技术有限公司 | A kind of method and apparatus realized virtual machine and examined oneself |
US9152474B2 (en) * | 2014-01-20 | 2015-10-06 | Netapp, Inc. | Context aware synchronization using context and input parameter objects associated with a mutual exclusion lock |
US9535704B2 (en) * | 2014-02-03 | 2017-01-03 | University Of Rochester | System and method to quantify digital data sharing in a multi-threaded execution |
CN104063331B (en) * | 2014-07-03 | 2017-04-12 | 龙芯中科技术有限公司 | Processor, shared storage region access method and lock manager |
CN108319496B (en) * | 2017-01-18 | 2022-03-04 | 阿里巴巴集团控股有限公司 | Resource access method, service server, distributed system and storage medium |
CN108932172B (en) * | 2018-06-27 | 2021-01-19 | 西安交通大学 | Fine-grained shared memory communication synchronization method based on OpenMP/MPI mixed parallel CFD calculation |
CN109271260A (en) * | 2018-08-28 | 2019-01-25 | 百度在线网络技术(北京)有限公司 | Critical zone locking method, device, terminal and storage medium |
CN109614220B (en) * | 2018-10-26 | 2020-06-30 | 阿里巴巴集团控股有限公司 | Multi-core system processor and data updating method |
Timeline:
- 2019-10-30: CN application CN201911043428.3A filed; granted as CN110781016B (active)
- 2019-10-30: CN divisional application CN202110377343.XA filed; granted as CN112905365B (active)
- 2020-08-24: WO application PCT/CN2020/110753 filed as WO2021082665A1 (application filing)
Non-Patent Citations (1)

| Title |
|---|
| FU ZHIJIE, ZHOU QUNBIAO: "Realization of the MCS Spinlock As the Linux Kernel Module", Microcomputer Applications, vol. 30, no. 7, 15 July 2009, pages 55-59, XP055808576, ISSN: 2095-347X * |
Also Published As
Publication number | Publication date |
---|---|
CN110781016B (en) | 2021-04-23 |
CN110781016A (en) | 2020-02-11 |
CN112905365A (en) | 2021-06-04 |
CN112905365B (en) | 2024-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021082665A1 (en) | Data processing method, apparatus, device, and medium | |
KR102268722B1 (en) | Data access apparatus and operating method thereof | |
CN110008262B (en) | Data export method and device | |
US9448938B2 (en) | Cache coherence protocol for persistent memories | |
US11106795B2 (en) | Method and apparatus for updating shared data in a multi-core processor environment | |
TWI537831B (en) | Multi-core processor, method to perform process switching, method to secure a memory block, apparatus to enable transactional processing using a multi-core device and method to perform memory transactional processing | |
US20150089156A1 (en) | Atomic Memory Update Unit & Methods | |
KR20130010442A (en) | Virtual gpu | |
CN110737608B (en) | Data operation method, device and system | |
JP6704623B2 (en) | Memory system operating method, memory system, and memory controller | |
CN108549562A (en) | Image loading method and device | |
US9367478B2 (en) | Controlling direct memory access page mappings | |
US11880925B2 (en) | Atomic memory update unit and methods | |
US20130326180A1 (en) | Mechanism for optimized intra-die inter-nodelet messaging communication | |
US20220318012A1 (en) | Processing-in-memory concurrent processing system and method | |
US8972693B2 (en) | Hardware managed allocation and deallocation evaluation circuit | |
US9250976B2 (en) | Tiered locking of resources | |
CN107645541B (en) | Data storage method and device and server | |
KR102395066B1 (en) | Apparatus for data access and operating method thereof | |
US9251101B2 (en) | Bitmap locking using a nodal lock | |
US10922232B1 (en) | Using cache memory as RAM with external access support | |
US20240345881A1 (en) | Memory management in a multi-processor environment | |
CN111580748B (en) | Apparatus and method for virtualized data management in a very large scale environment | |
WO2015004570A1 (en) | Method and system for implementing a dynamic array data structure in a cache line | |
CN116204584A (en) | Method and device for writing log, readable storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20882478 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20882478 Country of ref document: EP Kind code of ref document: A1 |