CN108363625A - Method, apparatus and server for ordered control of stored information by lock-free threads - Google Patents
- Publication number
- CN108363625A CN108363625A CN201810146120.0A CN201810146120A CN108363625A CN 108363625 A CN108363625 A CN 108363625A CN 201810146120 A CN201810146120 A CN 201810146120A CN 108363625 A CN108363625 A CN 108363625A
- Authority
- CN
- China
- Prior art keywords
- storage node
- thread
- cursor
- state
- current data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/522—Barrier synchronisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/542—Event management; Broadcasting; Multicasting; Notifications
Abstract
The present invention relates to the technical field of data storage, and provides a method, apparatus and server by which lock-free threads control stored information in an ordered manner. The method includes: according to the unpublished state of a shared storage node in a circular queue, a focal thread among at least two first threads calls a CAS (compare-and-swap) instruction while storing current data into the shared storage node; when the store of the current data ends, the unpublished state is modified to a published state, the published state indicating that a second thread may read the current data; the shared storage node in the published state is allocated to the second thread; and according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, the original cursor is updated by a unidirectional increment, so that the focal thread locks the shared storage node. In this way, the ordered access of threads to the data can be controlled, the system overhead generated when controlling the storage queue is overcome, and the concurrency efficiency of the threads is improved.
Description
Technical field
The present invention relates to the technical field of data storage, and more particularly to a method and apparatus by which lock-free threads control stored information in a circular queue.
Background art
In the related art, in order to prevent threads of an application or operating system from simultaneously reading and writing stored data in a storage queue and thereby causing data conflicts, the threads are locked so that they execute read and write operations serially. Since a suspended thread can acquire the lock and serially read or write stored data only after the thread holding the lock releases it, the concurrency efficiency of the storage queue is limited, so using lock-free threads is considerably more efficient.
To let lock-free threads read and write stored data in an ordered manner, blocking and non-blocking algorithms have been proposed for the storage queue, but both must ensure that concurrent threads access the same storage node synchronously (for ease of description, hereinafter called the shared storage node) to maintain the integrity of the storage queue. They complete the operations on the shared storage node either through a delay queue (for example, by generating and checking an additionally added unique code) or within a finite number of steps that constrain the multiple threads (for example, by adding a dummy node and controlling its enqueueing and dequeueing), which increases system overhead.
In some cases, for example when many threads on a server need to read and write the storage nodes in the storage queue frequently, the pronounced system overhead further reduces concurrency efficiency.
Summary of the invention
In view of this, the present invention solves the problem of how, while overcoming system overhead, to control the ordered reading and writing of information by lock-free threads on the storage nodes of a storage queue, so as to improve the concurrency efficiency of the threads.
Specifically, the present invention is achieved through the following technical solutions:
In a first aspect, the present invention provides a method by which lock-free threads control stored information in a circular queue, the method including the following steps:
According to the unpublished state of a shared storage node in the circular queue, a focal thread among at least two first threads calls a CAS instruction while storing current data into the shared storage node; when the store of the current data ends, the unpublished state is modified to a published state, the published state indicating that a second thread may read the current data; the shared storage node in the published state is allocated to the second thread; and according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, the original cursor is updated by a unidirectional increment, so that the focal thread locks the shared storage node.
Optionally, when the focal thread starts storing the data, it sends notification information carrying a storage number that uniquely identifies the storage location, and the CAS instruction is called according to the notification information.
Optionally, based on the portion of the current data that the second thread has read from the shared storage node, the published state is switched back to the unpublished state.
Optionally, when the portion of the current data read is less than the total amount of the current data, the published state switches to the exclusive state within the unpublished state, the exclusive state indicating that a first thread may append further data after the current data in the shared storage node; when the portion read equals the total amount, the published state switches to the idle state within the unpublished state.
Optionally, when the storage location is consistent with the original cursor, the original cursor is replaced with a new cursor equal to the sum of the original cursor and the unidirectional increment.
Optionally, when the shared storage node is the tail node of the circular queue, the unidirectional increment is changed from a positive vector to a negative vector, and the original cursor is updated to a new cursor equal to the sum of the original cursor and the negative vector.
Optionally, when the shared storage node is the tail node of the circular queue, the new cursor is set to the storage location corresponding to the head node of the circular queue.
Optionally, when the storage location is inconsistent with the original cursor, notification information indicating that the store of the current data should stop is fed back to the focal thread.
Optionally, notification information instructing the focal thread to continue storing the current data is fed back to it.
In a second aspect, based on the same conception, the present invention also provides an apparatus by which lock-free threads control stored information in a circular queue, the apparatus including the following units: an instruction calling unit, configured to, according to the unpublished state of a shared storage node in the circular queue, have the focal thread among at least two first threads call a CAS instruction while storing current data into the shared storage node; a state modification unit, configured to modify the unpublished state to a published state when the store of the current data ends, the published state indicating that a second thread may read the current data; a node allocation unit, configured to allocate the storage node in the published state to the second thread; and a cursor update unit, configured to update the original cursor by a unidirectional increment according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, so that the focal thread locks the shared storage node.
In a third aspect, based on the same conception, the present application also provides an order server, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, the processor implementing the following steps when executing the program: any of the method steps of the first aspect.
In a fourth aspect, based on the same conception, the present application also provides a computer storage medium. A CAS instruction and at least one other instruction or group of instructions are stored in the computer-readable storage medium; when the CAS instruction and the other instructions are executed, the following steps can be implemented: any of the method steps of the first aspect.
The advantageous effects brought by the technical solutions provided by the embodiments of the present invention are as follows: compared with the prior art, in order to store data into the shared storage node of the circular queue in an ordered manner, the present application has at least two lock-free first threads contend for the shared storage node while it is in the unpublished state, and the focal thread calls a CAS instruction while storing the current data into it. Using the CAS instruction, the original cursor is updated in the direction indicated by the unidirectional increment, which locks the shared storage node for the focal thread; the other first threads, different from the focal thread, find through CAS verification that the original cursor is inconsistent and therefore cannot operate on the shared storage node, so they no longer contend for it. On this basis, through the change of the shared storage node from the unpublished state to the published state, combined with the passive allocation scheme, the first thread and the second thread control the shared storage node serially, which reduces the system overhead when multiple threads access data in an ordered manner and improves concurrency efficiency.
Description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of a method by which lock-free threads control stored information in a circular queue, provided in embodiment one of the present invention;
Fig. 2a is a schematic diagram of a thread controlling the state of storage node 2, provided in embodiment one of the present invention;
Fig. 2b is a schematic diagram of the cursor movement after the thread in Fig. 2a stores data;
Fig. 2c is a schematic diagram of storage node 3 converting from the unpublished state to the published state in Fig. 2b;
Fig. 3 is another flow diagram of controlling stored information, provided in embodiment one of the present invention;
Fig. 4a is a schematic diagram of multiple threads contending for storage node 4, provided in embodiment one of the present invention;
Fig. 4b is a schematic diagram of multiple concurrent threads accessing data in order from the circular queue in Fig. 4a;
Fig. 4c is a schematic diagram of the state changes of the storage nodes after the threads in Fig. 4b access the data;
Fig. 5 is a flow diagram of a method by which lock-free threads control stored information in a circular queue, provided in embodiment two of the present invention;
Fig. 6 is another flow diagram of controlling stored information, provided in embodiment two of the present invention;
Figs. 7a to 7c are schematic diagrams of the state changes of the storage nodes in the storage queue provided in embodiment two;
Fig. 8 is a schematic diagram of an apparatus by which lock-free threads control stored information in a circular queue, provided in embodiment three of the present invention;
Fig. 9 is a schematic diagram of another apparatus by which lock-free threads control stored information in a circular queue, provided in embodiment three of the present invention;
Fig. 10 is a structural schematic diagram of a server, provided in embodiment four of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Explanation of technical terms
A circular queue is a storage queue that links distinct storage nodes in the manner of a singly linked list and controls the shared storage node for the focal thread in the direction indicated by the unidirectional increment.
A cursor characterizes, as a parameter, the storage address of the shared storage node and indicates the storage location of the storage node within the circular queue, so that concurrent threads can contend for the storage node.
A shared storage node is a node of the circular queue contended for by at least two concurrent threads, so that the thread that wins the node can store data into it or read data from it.
A focal thread is the concurrent thread that wins the shared storage node; after the original cursor is updated, this thread retains the control of the shared storage node that it obtained during the contention.
Embodiment one
In some cases, for example, distributed clients generate payment orders in bulk from users' purchase behavior and send the payment orders to a server, which processes each payment order. To improve the consumer experience promptly, the server needs to store the consumption data in the payment orders in time and feed results back to the clients. In this situation, the concurrent lock-free threads that store the consumption data frequently contend for the shared storage node, the storage nodes also frequently enter and leave the storage queue, and correspondingly the multiple threads on the server that read the consumption data enqueue and dequeue more frequently as well. If, to preserve the integrity of the data structure of the storage queue, dummy nodes are added or additional identifiers are attached, concurrency efficiency is reduced.
In order to improve the concurrency efficiency of the threads in the above situations, as shown in Fig. 1, the present invention provides a flow chart of a method by which lock-free threads control stored information in a circular queue, the method including the following steps:
Step S110: according to the unpublished state of the shared storage node in the circular queue, the focal thread among at least two first threads calls a CAS instruction while storing current data into the shared storage node.
In this embodiment, the circular queue includes multiple storage nodes, and the presence or absence of a "published" character string distinguishes the published and unpublished states of each storage node: only when a storage node is in the published state is a thread allowed to read the data stored in it, and threads are forbidden to store data into it.
As shown in Fig. 2a, the circular queue 200 includes storage nodes 1-4, of which storage node 2 stores "published" and the other storage nodes do not; at this moment storage nodes 1, 3 and 4 are in the unpublished state and storage node 2 is in the published state. The parallel thread, thread 1, may read the stored data a from storage node 2, while thread 2 is forbidden to store data into storage node 2, which prevents different threads from accessing the same storage node simultaneously.
A thread that stores current data into a storage node is a first thread, and a thread that reads data from a storage node is a second thread. First threads and second threads may be started concurrently, so with respect to the randomness of thread start-up the two kinds of thread are parallel threads; but with respect to the shared access to the stored data in the same storage node in the published state, the first thread and the second thread are serial threads.
Multiple parallel first threads may freely contend for the same storage node (hereinafter, the shared storage node) in order to store current data into it while it is in the unpublished state; the multiple first threads may also be scheduled by the scheduling thread of the operating system. While the focal thread that first wins the shared storage node stores the current data, it calls the compare-and-swap (CAS) instruction and, according to the identity between the original cursor in the instruction and the current storage location of the shared storage node in the circular queue, changes the original cursor by the unidirectional increment; the unidirectional increment can indicate the direction in which the original cursor moves from the head node to the tail node of the circular queue.
As shown in Fig. 2b, when shared storage node 3 is in the unpublished state, two first threads contend for it simultaneously. First thread 2 wins and may store the current data a3 into shared storage node 3, while first thread 1 is forbidden to store data into it. This is realized as follows: when first thread 2 starts storing data it calls the CAS instruction; the CAS instruction compares storage subscript 3 with original cursor 3, finds them consistent, and changes the original cursor to new cursor 4, where new cursor 4 equals original cursor 3 plus unidirectional increment 1. When the other first thread, thread 1, continues to contend for shared storage node 3, the updated cursor 4 and storage location 3 are inconsistent in the CAS instruction, so the store corresponding to first thread 1 is not executed, and the focal thread has locked the shared storage node.
While the focal thread stores the current data into the shared storage node, a scheduling thread different from the at least two first threads forbids the other first threads from contending for the focal storage node, which further ensures that the first thread locks the storage node.
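The contention of Fig. 2b can be sketched as follows. This is a model under stated assumptions: Python has no user-level CAS instruction, so a lock emulates its semantics, and the thread names are illustrative.

```python
import threading

class Cursor:
    """Lock-emulated CAS on the original cursor (assumption: stands in
    for the CPU's atomic compare-and-swap)."""
    def __init__(self, value):
        self.value = value
        self._lock = threading.Lock()

    def cas(self, expected, new):
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

cursor = Cursor(3)   # original cursor points at shared node 3
UNIT = 1             # unidirectional increment

results = {}
def first_thread(name):
    # Each contender compares storage subscript 3 with the original
    # cursor and, on success, writes new cursor 3 + 1 = 4.
    results[name] = cursor.cas(3, 3 + UNIT)

t1 = threading.Thread(target=first_thread, args=("first-thread-1",))
t2 = threading.Thread(target=first_thread, args=("first-thread-2",))
t1.start(); t2.start()
t1.join(); t2.join()

# Exactly one contender wins and becomes the focal thread that locks
# node 3; the loser finds cursor 4 != subscript 3 and stops its store.
winners = [name for name, ok in results.items() if ok]
```

Whichever thread wins the CAS is the focal thread of the step; the loser needs no suspension or lock hand-off, which is the overhead the patent avoids.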
Step S120: when the store of the current data ends, the unpublished state is modified to the published state.
To access the current data in the shared storage node in order, the change of the published state allows the first thread and the second thread to control the storage node in a time-shared manner, reducing data conflicts between accesses. When the first thread finishes storing data into the storage node in the unpublished state, the unpublished state is immediately replaced with the published state, and the second thread reads the stored data from it; at this moment the published state indicates that the second thread, serial with the first thread, may read the aforementioned current data, which can be read only after the second thread wakes up. Referring to Fig. 2b, the new cursor obtained by updating the original cursor becomes the original cursor, the storage node 4 it points to is the shared storage node, and first thread 1 contends for shared storage node 4.
Step S130: the shared storage node in the published state is allocated to the second thread.
After the unpublished state is modified to the published state and before the second thread wakes up, the scheduling thread allocates the shared storage node in the published state to the second thread; alternatively, the second thread is woken up immediately when the shared storage node changes from the unpublished state to the published state, so that it reads the stored data in the storage node.
Multiple second threads with different priorities are configured in a thread pool, the priorities descending in order with the storage addresses of the storage nodes in the circular queue, and each second thread is bound to a storage node; alternatively, a mapping table is generated from the correspondence between priority and storage address and stored in the thread pool. This ensures that the different second threads read the storage nodes in the published state in order, reducing conflicts when reading data; those skilled in the art can implement this from the foregoing description, and details are not repeated here.
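The mapping table mentioned above can be sketched as a dictionary from storage subscript to second-thread priority. The concrete direction chosen here (highest priority at the lowest storage address) is an assumption — the patent only requires a descending order — and the function name is illustrative.

```python
def build_priority_map(queue_length):
    """Priority of the second thread bound to each storage subscript
    (1-based), descending as the storage address ascends; this direction
    is an assumption, since the patent leaves it open."""
    return {subscript: queue_length - subscript + 1
            for subscript in range(1, queue_length + 1)}

priority_map = build_priority_map(4)
# Second threads then service published nodes from high to low priority,
# i.e. in ascending storage order, which keeps the reads ordered.
```

The table could equally be persisted in the thread pool, as the embodiment suggests; the dictionary merely makes the priority-address correspondence explicit.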
Step S140: according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, the original cursor is updated by the unidirectional increment, so that the focal thread locks the shared storage node.
When a first thread starts storing current data into the shared storage node it has won, it sends notification information carrying the current storage location of the shared storage node in the circular queue, and the CAS instruction is called according to the notification information.
Three parameters are held in the CAS instruction: the original cursor, the expected value, and the new cursor. When the CAS instruction is called, the current storage address carried in the notification information is stored into the expected value, and it is judged whether the current storage address and the original cursor are identical. If they are identical, the original cursor is replaced with the new cursor, which points to a target storage node different from the storage node the original cursor points to, so that the other first threads can execute store operations on the target storage node. If they differ, the original cursor is neither changed nor rolled back, indicating that the shared storage node does not correspond to the storage node the original cursor points to; the first thread is then forbidden to store data, which ensures the order of data stores between first threads.
While the focal thread stores data into the shared storage node, the other first threads may continue to contend for it in order to store data into it, but the focal thread calls the CAS instruction first and thereby locks the shared storage node in advance; when the other first threads send notification information again and call the CAS instruction, the original cursor in the CAS instruction has already been updated, so the other first threads are made to stop storing data.
After the original cursor is changed, the other first threads freely contend for the storage node of the circular queue corresponding to the new cursor; at the same time, the first thread continues to store the current data into the storage node corresponding to the original cursor. Different first threads can thus store data into different storage nodes in parallel, which reduces data conflicts between first threads and, across the different storage nodes of the circular queue, realizes parallel stores by the first threads and improves concurrency efficiency.
As shown in Fig. 2c, when first thread 2 finishes storing data a3 into the storage node corresponding to original cursor 3, the "published" character string is stored into storage node 3, converting it from the unpublished state to the published state; before the state transition the original cursor is replaced with the new cursor, and first thread 1 wins by contention the storage node 4 corresponding to the new cursor, storage node 4 being in the unpublished state at this moment.
As an alternative implementation, the flow is shown in Fig. 3.
Step S310: when the shared storage node corresponding to the original cursor in the circular queue is in the unpublished state, multiple first threads contend for the shared storage node, and the focal thread sends notification information while storing the current data.
At least three first threads compete with one another; the first thread that wins the right to store data into the shared storage node first is the focal thread, and the shared storage node is in the unpublished state.
The unpublished state may include the idle state and the exclusive state: the idle state characterizes that no stored data is present in the shared storage node, and the exclusive state characterizes that stored data is present in it. In the state where no data is stored, the "idle" character string is written; correspondingly, in the state where stored data (hereinafter, prior data) already exists in the shared storage node, the "exclusive" character string is written into it.
When the shared storage node is in the exclusive or idle state and the focal thread starts storing data, it sends the notification information so that the CAS instruction is called, which reduces the call delay and changes the original cursor as early as possible.
As shown in Fig. 4a, three first threads 1-3 contend for the shared storage node 4 corresponding to the original cursor of circular queue 400; the "idle" character string indicating that it is in the idle state is stored in shared storage node 4.
Step S320: when the storage number of the shared storage node carried in the notification information is identical with the original cursor in the CAS instruction, the original cursor is replaced with the new cursor, which equals the sum of the original cursor and the unidirectional increment.
The CAS instruction is called according to the notification message carrying the current storage location of the aforementioned shared storage node; the CAS instruction changes its expected value to the current storage location. When the current storage location is identical with the original cursor, the original cursor is replaced with the aforementioned new cursor, and a first notification message indicating that the storage node corresponding to the new cursor may be contended for is fed back to the first threads. When the current storage location differs from the original cursor, the original cursor is maintained, and a second notification message indicating that storing data into the shared storage node should stop is fed back to the first thread, so that the first threads can actively perceive one another's permission to operate on the shared storage node and do not repeatedly send notification information, which would cause repeated calls of the CAS instruction.
When the focal thread receives the first notification message, it continues to store the current data into the shared storage node.
Except at the tail node of the circular queue, the new cursor can be the original cursor plus the unidirectional increment, which defaults to a positive vector with a default step of 1. When the original cursor equals the total length of the circular queue, the shared storage node the original cursor points to is the tail node; the new cursor is then set to the storage location of the head node of the circular queue, which guarantees the order and consistency of the storage nodes in the circular queue.
As shown in Fig. 4b, first thread 3 wins the priority of storing current data a4 into shared storage node 4. While storing a4, it replaces the original cursor with the new cursor through the CAS instruction; shared storage node 4 is the tail node of circular queue 410, so at this moment the original cursor has reached the total number of storage nodes, 4, and subscript 1 of the head node is set as the new cursor. When first thread 3 receives notification message 1 fed back by the CAS instruction, it keeps storing the current data into shared storage node 4 and, when the store ends, changes "idle" to "published"; otherwise, upon receiving the second notification message, first thread 3 stops storing data a4. Meanwhile, second threads 1 and 2 read the prior data from storage nodes 2 and 3 respectively.
Alternatively, when the shared storage node is the aforementioned tail node, the default step is changed to the opposite vector, and the original cursor is updated according to the sum of the opposite vector and the storage location of the shared storage node. For example, when the subscript of the shared storage node is 4, the default step 1 is changed to -1 and the new cursor is 4 - 1 = 3.
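Both tail-node variants reduce to a line of arithmetic. The sketch below assumes 1-based subscripts, matching the figures; the function names are illustrative.

```python
def advance_reset_to_head(cursor, queue_length, step=1):
    """Variant 1: at the tail node, set the new cursor to the head
    node's storage location instead of adding the step."""
    if cursor == queue_length:
        return 1
    return cursor + step

def advance_flip_step(cursor, queue_length):
    """Variant 2: at the tail node, change the default step 1 to the
    opposite vector -1, so the new cursor is cursor - 1."""
    step = -1 if cursor == queue_length else 1
    return cursor + step
```

For the example above, variant 1 maps cursor 4 to head subscript 1, while variant 2 maps it to 4 - 1 = 3; away from the tail both simply add the default step.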
Step S330: when the store of the current data ends, the exclusive state or idle state within the unpublished state is modified to the published state.
Step S340: the shared storage node in the published state is allocated to the second thread.
Each second thread in the thread pool is bound to a storage node of the circular queue; when the node converts from the exclusive or idle state to the published state, the second thread bound to the shared storage node reads the stored data in it.
According to the size relationship between the portion of the stored data read by the second thread and the total amount of stored data in the shared storage node, the published state is converted to the exclusive state or the idle state: when the aforementioned portion is less than the total amount, the "published" character string is replaced with "exclusive"; when the two are equal, the "published" character string is replaced with "idle".
Referring to Figs. 4b and 4c, second thread 1 reads data a2-1 from storage node 2; since a2-1 is less than a2, "published" is changed to "exclusive" when the read ends. Second thread 2 reads data a3 from storage node 3 and changes "published" to "idle" when the read ends.
It should be noted that the at least two first threads compete with one another for the highest priority of possessing the shared storage node. The current first thread holding the highest priority can store data and call the CAS instruction in parallel, and the effect of calling the CAS instruction once is to change the original cursor so that the current first thread locks the shared storage node, while the other first threads compete for the priority of the storage node corresponding to the new cursor. Therefore, the aforementioned unpublished state and the original cursor can be modified concurrently; those skilled in the art can also combine the foregoing implementations so that the two processes run serially.
Embodiment two
In some cases, for example, distributed clients share bullet-screen comments through communication interaction with a server: a comment produced by a user's editing action in one client is sent to the server, and the server feeds the comment back in response to video requests from other clients, so that the distributed clients share it. When the clients play a live video, the server reads the comments and inserts them into the live video; when interaction between the server and the distributed clients is frequent, different comments also need to be inserted into the live video continuously.
In order to improve the concurrency efficiency of the threads in the above situations, as shown in Fig. 5, the present application provides another method by which lock-free threads control stored information in a circular queue, as follows.
Step S510: according to the published state of the shared storage node in the circular queue, the focal thread among at least two second threads calls a CAS instruction while reading current data from the shared storage node.
In this embodiment, the focal thread is the thread among the at least two second threads that wins the shared storage node. According to the published state of each storage node in the circular storage queue, the scheduling thread can schedule the at least two second threads; while reading the current data in the shared storage node, the focal thread sends a notification message to the CPU, and the CAS instruction is called according to the notification message, so that the current data can be copied out in order to be read.
Here, the CAS instruction may be an atomic data operation of the CPU. Compared with the prior-art approach of locking threads in the operating system, atomic data operations speed things up considerably: for example, where operating-system locking achieves 100,000 computations per second, the CAS mechanism can reach 1,000,000.
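The atomic compare-and-swap operation referred to above is exposed by mainstream languages; the following is a minimal, illustrative sketch in Java (the patent does not specify a language), where `AtomicInteger.compareAndSet` plays the role of the CAS instruction and the variable name `cursor` is an assumption:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the CAS primitive: the update succeeds only when the
// current value equals the expected value, all in one atomic step.
public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger cursor = new AtomicInteger(0);

        boolean first  = cursor.compareAndSet(0, 1); // expected 0 -> set 1
        boolean second = cursor.compareAndSet(0, 2); // fails: value is now 1

        System.out.println(first + " " + second + " " + cursor.get());
        // prints "true false 1"
    }
}
```

No lock is acquired at any point: a losing thread simply observes that its expectation was stale, which is the property the method above exploits.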
Step S520: when the storage location of the shared storage node in the circular queue is identical to the original cursor in the CAS instruction, update the original cursor by the unit increment, so that the focus thread locks the shared storage node.
The consistency between the storage index carried in the called CAS instruction and the index of the shared storage node in the circular queue is checked; when the two are identical, the original cursor is replaced by the new cursor, where the new cursor equals the sum of the original cursor and the unit increment (which may be 1), so that the focus thread locks the shared storage node.
After the original cursor is updated, the focus thread and the other second threads receive shared information. The focus thread continues reading the current data from the shared storage node, and a first thread seizes the storage node corresponding to the new cursor according to the new cursor carried in the shared information.
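The race described above can be illustrated with the sketch below (Java, all names assumed for illustration): two "second threads" attempt a CAS against the same original cursor; exactly one wins, becomes the focus thread, and advances the cursor by the unit increment of 1.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of step S520: threads race to claim the slot at the
// original cursor; the single CAS winner locks it and advances the cursor.
public class CursorClaim {
    static final AtomicInteger cursor = new AtomicInteger(0);

    // Returns the claimed slot index, or -1 if another thread won the race.
    static int tryClaim(int expected) {
        return cursor.compareAndSet(expected, expected + 1) ? expected : -1;
    }

    public static void main(String[] args) throws InterruptedException {
        final int expected = cursor.get(); // snapshot both threads race for
        Runnable reader = () -> {
            int slot = tryClaim(expected);
            if (slot >= 0) {
                System.out.println(Thread.currentThread().getName()
                        + " locked slot " + slot + " as the focus thread");
            }
        };
        Thread t1 = new Thread(reader);
        Thread t2 = new Thread(reader);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // exactly one thread won; the cursor advanced by the unit increment
        System.out.println("cursor=" + cursor.get());
    }
}
```

The losing thread is never blocked; it can immediately retry against the new cursor, which is what lets the other threads proceed to the next storage node.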
Step S530: when the focus thread finishes reading the current data, the published state is changed to the non-published state.
An information description table of the storage nodes and their states is established. The table contains the mapping between each storage node's storage location and its status information; the status information includes the value 1, which represents the published state, and the value 2, which represents the non-published state, as shown in Table 1.
Table 1
Index | State
---|---
1 | 2
2 | 1
3 | 1
4 | 2
n | 1
The non-published state may include an idle state and an exclusive state, where the idle state may be encoded as 2-1 and the exclusive state as 2-2. The information description table may also store the mapping between the aforementioned priorities and the storage locations, as shown in Table 2.
Table 2
Index | Thread priority | State
---|---|---
1 | 1 | 2-2
2 | 2 | 2-2
3 | 3 | 2-1
4 | 4 | 1
n | N | 2-2
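For illustration only, the description table of Tables 1 and 2 can be modeled as a map from node index to status code; the string encoding below is an assumption based on the 1 / 2-1 / 2-2 values above, and all names are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the information description table: node index -> status code,
// with 1 = published, 2-1 = idle, 2-2 = exclusive (encoding assumed here).
public class StateTable {
    static final String PUBLISHED = "1";
    static final String IDLE      = "2-1"; // no preceding data in the node
    static final String EXCLUSIVE = "2-2"; // subsequent data may be appended

    static boolean isPublished(String code) {
        return PUBLISHED.equals(code);
    }

    public static void main(String[] args) {
        // Mirrors Table 2 (thread-priority column omitted for brevity)
        Map<Integer, String> table = new LinkedHashMap<>();
        table.put(1, EXCLUSIVE);
        table.put(2, EXCLUSIVE);
        table.put(3, IDLE);
        table.put(4, PUBLISHED);
        table.forEach((idx, code) -> System.out.println(
                "node " + idx + " published=" + isPublished(code)));
    }
}
```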
The non-published state indicates that a first thread has stored preceding data before the current data and may append subsequent data after the current data. Within it, the exclusive state indicates that a first thread may continue writing subsequent data after the current data, while the idle state indicates that the shared storage node contains no preceding data, so a first thread may store the current data into it directly.
Step S540: allocate a shared storage node in the non-published state to a first thread.
The first thread can monitor the state of each shared storage node in the circular queue. After it detects that a shared storage node has switched from the published state to the non-published state, the non-published state switches back to the published state based on the data component of the subsequent data the first thread stores into the shared storage node: when the subsequent data is less than the total data capacity of the shared storage node, the exclusive state within the non-published state switches to the published state, the exclusive state indicating that the first thread may continue writing subsequent data into the shared storage node; when the subsequent data equals the aforementioned total data capacity, the idle state within the non-published state switches to the published state.
Here, subsequent data is data stored into the shared storage node after the current data, and data stored into the shared storage node before the current data is preceding data.
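The transition rule of step S540 can be sketched as a small state function. The sketch below models only the rule as stated above (the exclusive sub-state publishes on a partial write, the idle sub-state on a full write); all names are assumed, not taken from the patent:

```java
// Hedged sketch of the S540 rule: a node in the non-published state returns
// to the published state after a first thread writes subsequent data into it.
public class StateTransition {
    enum State { PUBLISHED, IDLE, EXCLUSIVE }

    // written: amount of subsequent data stored; capacity: the node's total
    // data capacity. Returns the node's state after the write completes.
    static State afterWrite(State current, long written, long capacity) {
        if (current == State.EXCLUSIVE && written < capacity) {
            return State.PUBLISHED; // partial write: readers may proceed
        }
        if (current == State.IDLE && written == capacity) {
            return State.PUBLISHED; // node filled from empty: publish
        }
        return current; // no transition defined for other combinations
    }

    public static void main(String[] args) {
        System.out.println(afterWrite(State.EXCLUSIVE, 3, 8)); // PUBLISHED
        System.out.println(afterWrite(State.IDLE, 8, 8));      // PUBLISHED
    }
}
```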
As another optional embodiment, as shown in Fig. 6:
Step S610: according to the published state of a shared storage node in the circular queue, the focus thread among at least two second threads calls a CAS instruction in the course of reading stored data from the shared storage node.
Step S620: when reading of the current data ends, the published state is changed to the non-published state.
Step S630: a storage node in the non-published state is allocated to a first thread.
Step S640: when the storage location corresponding to the shared storage node is identical to the original cursor in the CAS instruction, the original cursor is replaced by the new cursor, the new cursor being equal to the sum of the unit increment and the original cursor.
For example, as shown in Figs. 7a to 7c, after circular queue 710 is initialized, every storage node is in the idle state and is bound to one of first threads 1-4 in thread pool 1012; the priority of first threads 1-4 decreases in turn (increasing in the direction of the arrow). While multiple first threads and second threads access data in circular queue 1011, concurrent second threads 1 and 2 seize storage node 1, which has switched from the idle state to the published state, so as to copy the data once first thread 1 has stored it into storage node 1; first thread 1 is then terminated in thread pool 1012 and first thread 5 is started, with a priority lower than that of first thread 4, as in Fig. 7b. Whichever of second threads 1 and 2 wins contention for storage node 1 copies data A1 from it, as shown in Fig. 7c; after the copy, storage node 1 switches from the published state to the exclusive state, and storage node 1, now in the exclusive state, is bound to thread 5.
It should be noted that the storage nodes, their states, and the way threads access data from the storage nodes in Embodiment two are similar to the corresponding parts of Embodiment one; those skilled in the art can combine Embodiments one and two to develop specific implementations, which are not repeated here.
Embodiment three
Based on the same concept, the embodiments of the present invention also provide a device for processing data, which can be implemented in software, in hardware, or in a combination of software and hardware. Taking software implementation as an example, the device is, as a device in the logical sense, formed by the CPU of its host device reading the corresponding computer program instructions from memory and running them.
In an exemplary embodiment of the present invention, a device for a lock-free thread to control storage information in a circular queue runs in a basic environment comprising a CPU, memory and other hardware. At the logical level, the logical structure of the device 800 is shown in Fig. 8 and includes: an instruction calling unit 810, a status modification unit 820, a node allocation unit 830 and a cursor updating unit 840.
Example one
The instruction calling unit 810 is configured to, according to the non-published state of a shared storage node in the circular queue, have the focus thread among at least two first threads call a CAS instruction in the course of storing current data into the shared storage node.
The status modification unit 820 is configured to change the non-published state to the published state when storing of the current data ends, the published state indicating that a second thread may read the current data.
The node allocation unit 830 is configured to allocate a storage node in the published state to a second thread.
The cursor updating unit 840 is configured to update the original cursor by the unit increment according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, so that the focus thread locks the shared storage node.
Example two
The instruction calling unit 910 is configured to, according to the published state of a shared storage node in the circular queue, have the focus thread among at least two second threads call a CAS instruction in the course of reading current data from the shared storage node.
The cursor updating unit 920 is configured to update the original cursor by the unit increment when the storage location of the shared storage node in the circular queue is identical to the original cursor in the CAS instruction.
The status modification unit 930 is configured to, when the focus thread finishes reading the current data, change the published state to the non-published state, which indicates that a first thread may store subsequent data after the current data.
The node allocation unit 940 is configured to allocate a shared storage node in the non-published state to a first thread.
The instruction calling unit may include a message sending unit and a calling unit. The message sending unit sends a notification message when the focus thread starts to read the current data; the calling unit receives the notification message and calls the CAS instruction in response to it.
The cursor updating unit may hold the original cursor, an expected value and the new cursor, where the expected value stores the storage location, carried in the notification message, of the shared storage node in the circular queue, and the new cursor equals the sum of the original cursor and the unit increment. When the storage location of the shared storage node in the circular queue is identical to the original cursor in the CAS instruction, the original cursor is replaced with the new cursor, and a third notification message is fed back to the focus thread instructing it to continue reading the current data from the shared storage node; when the aforementioned storage location differs from the original cursor, a fourth notification message is fed back to the focus thread instructing it to stop reading from the shared storage node.
The unit increment defaults to a positive increment of 1; when the shared storage node is the tail node of the circular queue, the increment may be changed from positive to negative, or the new cursor may be set to the head node of the storage queue.
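The head-node alternative mentioned above amounts to a modulo advance for an n-node queue. A minimal sketch, with assumed names:

```java
// Hedged sketch of the tail-node option: rather than switching the unit
// increment to a negative value, the new cursor is reset to the head node,
// which for a queue of queueSize nodes is a single modulo step.
public class Wraparound {
    // Advance the cursor by the unit increment of 1, wrapping the tail node
    // back to index 0 (the head node).
    static int advance(int cursor, int queueSize) {
        return (cursor + 1) % queueSize;
    }

    public static void main(String[] args) {
        for (int c = 0, i = 0; i < 6; i++) {
            System.out.print(c + " ");
            c = advance(c, 4); // 4-node circular queue: 0 1 2 3 0 1
        }
    }
}
```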
The status modification unit 930 may include a first modification unit and a second modification unit. The first modification unit changes the exclusive state within the non-published state to the published state when the current data stored by the first thread is less than the total data capacity of the shared storage node after the store; the second modification unit changes the idle state within the non-published state to the published state when the current data stored by the first thread equals the total data capacity of the shared storage node after the store.
For the implementation of the functions and effects of each unit in the above device, refer to the implementation of the corresponding steps in the above method; details are not repeated here.
Embodiment four
As shown in Fig. 10, an embodiment of the present invention also provides a server 1000. The server 1000 may be an order server or a video server and includes a memory 1100, a processor 1300 connected to the memory 1100 through a control bus 1200, and a communication interface 1400 connected to the control bus 1200 that can communicate with clients or other servers. A computer program stored in the memory 1100 can run on the processor 1300; when the computer program runs, the steps and functions described in Embodiment one are realized.
The functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on a computer-readable medium or transmitted over it as one or more instructions or code. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that a general-purpose or special-purpose computer can access.
The foregoing describes merely preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (11)
1. A method for a lock-free thread to control storage information in a circular queue, characterized in that the method comprises:
according to a non-published state of a shared storage node in the circular queue, calling, by a focus thread among at least two first threads, a CAS instruction in the course of storing current data into the shared storage node;
when the storing of the current data ends, changing the non-published state to a published state, the published state being used to indicate that a second thread may read the current data;
allocating the shared storage node in the published state to the second thread;
according to the consistency between the storage location of the shared storage node in the circular queue and an original cursor in the CAS instruction, updating the original cursor by a unit increment, so that the focus thread locks the shared storage node.
2. The method according to claim 1, characterized in that calling, by the focus thread among the at least two first threads, the CAS instruction in the course of storing the current data into the shared storage node according to the non-published state of the shared storage node in the circular queue comprises:
when the focus thread starts to store the data, sending, by the focus thread, notification information carrying a storage number that uniquely identifies the storage location;
calling the CAS instruction according to the notification information.
3. The method according to claim 1, characterized in that, after allocating the shared storage node in the published state to the second thread, the method comprises:
switching the published state to the non-published state based on the current data component read by the second thread from the shared storage node.
4. The method according to claim 3, characterized in that switching the published state to the non-published state based on the current data component read by the second thread from the shared storage node comprises:
when the current data component is less than the total amount of the current data, switching the published state to an exclusive state within the non-published state, the exclusive state being used to indicate that a first thread may continue writing data following the current data into the shared storage node;
when the current data component is equal to the total amount of the current data, switching the published state to an idle state within the non-published state.
5. The method according to any one of claims 1 to 3, characterized in that updating the original cursor by the unit increment according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, so that the focus thread locks the shared storage node, comprises:
when the storage location is consistent with the original cursor, replacing the original cursor with a new cursor, the new cursor being equal to the sum of the original cursor and the unit increment.
6. The method according to any one of claims 1 to 3, characterized in that updating the original cursor by the unit increment according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, so that the focus thread locks the shared storage node, comprises:
when the shared storage node is the tail node of the circular queue, changing the unit increment from a positive increment to a negative increment;
updating the original cursor with a new cursor equal to the sum of the original cursor and the negative increment.
7. The method according to any one of claims 1 to 3, characterized in that updating the original cursor by the unit increment according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, so that the focus thread locks the shared storage node, comprises:
when the shared storage node is the tail node of the circular queue, setting the new cursor to the storage location corresponding to the head node of the circular queue.
8. The method according to any one of claims 1 to 3, characterized in that updating the original cursor by the unit increment according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, so that the focus thread locks the shared storage node, comprises:
when the storage location is inconsistent with the original cursor, feeding back, to the focus thread, notification information indicating that storing of the current data is to be stopped.
9. The method according to claim 4, characterized in that, when the storage location is consistent with the original cursor, replacing the original cursor with the new cursor, the new cursor being equal to the sum of the original cursor and the unit increment, comprises:
feeding back, to the focus thread, notification information indicating that the current data is to be written continuously.
10. A device for a lock-free thread to control storage information in a circular queue, characterized in that the device comprises:
an instruction calling unit, configured to, according to a non-published state of a shared storage node in the circular queue, have a focus thread among at least two first threads call a CAS instruction in the course of storing current data into the shared storage node;
a status modification unit, configured to change the non-published state to a published state when the storing of the current data ends, the published state being used to indicate that a second thread may read the current data;
a node allocation unit, configured to allocate the storage node in the published state to the second thread;
a cursor updating unit, configured to update the original cursor by a unit increment according to the consistency between the storage location of the shared storage node in the circular queue and the original cursor in the CAS instruction, so that the focus thread locks the shared storage node.
11. An order server, characterized by comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, realizes the steps of the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810146120.0A CN108363625B (en) | 2018-02-12 | 2018-02-12 | Method, device and server for orderly controlling storage information by lockless threads |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810146120.0A CN108363625B (en) | 2018-02-12 | 2018-02-12 | Method, device and server for orderly controlling storage information by lockless threads |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108363625A true CN108363625A (en) | 2018-08-03 |
CN108363625B CN108363625B (en) | 2022-04-19 |
Family
ID=63005735
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810146120.0A Active CN108363625B (en) | 2018-02-12 | 2018-02-12 | Method, device and server for orderly controlling storage information by lockless threads |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108363625B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110888727A (en) * | 2019-11-26 | 2020-03-17 | 北京达佳互联信息技术有限公司 | Method, device and storage medium for realizing concurrent lock-free queue |
CN111638854A (en) * | 2020-05-26 | 2020-09-08 | 北京同有飞骥科技股份有限公司 | Performance optimization method and device for NAS construction and SAN stack block equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070169123A1 (en) * | 2005-12-30 | 2007-07-19 | Level 3 Communications, Inc. | Lock-Free Dual Queue with Condition Synchronization and Time-Outs |
US8095727B2 (en) * | 2008-02-08 | 2012-01-10 | Inetco Systems Limited | Multi-reader, multi-writer lock-free ring buffer |
CN104077113A (en) * | 2014-07-10 | 2014-10-01 | 中船重工(武汉)凌久电子有限责任公司 | Method for achieving unlocked concurrence message processing mechanism |
CN105975349A (en) * | 2016-05-04 | 2016-09-28 | 北京智能管家科技有限公司 | Thread lock optimization method |
CN106648461A (en) * | 2016-11-15 | 2017-05-10 | 努比亚技术有限公司 | Memory management device and method |
Also Published As
Publication number | Publication date |
---|---|
CN108363625B (en) | 2022-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017133623A1 (en) | Data stream processing method, apparatus, and system | |
CN104750543B (en) | Thread creation method, service request processing method and relevant device | |
Kash et al. | No agent left behind: Dynamic fair division of multiple resources | |
US9703610B2 (en) | Extensible centralized dynamic resource distribution in a clustered data grid | |
CN106537863B (en) | Method and apparatus for concomitantly handling network packet | |
EP2386962B1 (en) | Programmable queue structures for multiprocessors | |
CN105700930B (en) | The application acceleration method and device of embedded OS | |
JP2015537307A (en) | Component-oriented hybrid cloud operating system architecture and communication method thereof | |
CN105302497B (en) | A kind of buffer memory management method and system | |
CN105242872B (en) | A kind of shared memory systems of Virtual cluster | |
JP2019535072A (en) | System and method for providing messages to multiple subscribers | |
JP6682668B2 (en) | System and method for using a sequencer in a concurrent priority queue | |
CN106775493B (en) | A kind of storage control and I/O Request processing method | |
CN103366022B (en) | Information handling system and disposal route thereof | |
CN109088829A (en) | A kind of data dispatching method, device, storage medium and equipment | |
CN108363625A (en) | A kind of no locking wire journey orderly controls the method, apparatus and server of storage information | |
CN108363624A (en) | A kind of no locking wire journey orderly controls the method, apparatus and server of storage information | |
JP2009059310A (en) | Program controller | |
CN105094751A (en) | Memory management method used for parallel processing of streaming data | |
WO2015123974A1 (en) | Data distribution policy adjustment method, device and system | |
CN106406764A (en) | A high-efficiency data access system and method for distributed SAN block storage | |
JP2010226275A (en) | Communication equipment and communication method | |
CN105208111B (en) | A kind of method and physical machine of information processing | |
JP2018133758A (en) | Communication system between virtual machines | |
JP2015162029A (en) | server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||