CN103488717B - Lock-free data gathering method and lock-free data gathering device
- Publication number: CN103488717B (application CN201310413005.2A)
- Authority: CN (China)
- Prior art keywords: cache array, cache, packet, thread, address
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2308—Concurrency control
Abstract
The invention discloses a lock-free data gathering method and a lock-free data gathering device. The method includes: opening up at least one cache array in a cache, where the attributes of each cache array include a fetch pointer, a store pointer, and a cache flag marking whether the store pointer is before the fetch pointer; creating at least one store thread for saving packets extracted from clients into a cache array and at least one fetch thread for taking packets out of a cache array, such that each cache array has exactly one corresponding store thread and one corresponding fetch thread; and letting the store thread and fetch thread of each opened cache array run or wait according to the attributes of the corresponding cache array, so that packets are extracted from at least one client in parallel and gathered. With the method and device, multiple threads can gather data concurrently through the cache, and the efficiency of data gathering can be improved.
Description
Technical field
The present invention relates to the field of computer application technology, in particular to the technical field of data storage, and more particularly to a lock-free data gathering method and device.
Background technology
Gathering technology processes multiple pieces of data or information and combines them into data that serves user needs more effectively. During gathering, an application program that receives data from the network in real time must cache the received data, while other application programs take data out of the cache and gather it.
At present, storing data into the cache and taking it back out must be an "atomic" operation: while one side is operating on the cache, the other side must not operate on it. Conventionally, because data traffic is small, atomicity is usually guaranteed by "locking" the cache. But once data traffic rises above 10 Mbps, the speed at which the locking approach processes data falls far short of what applications require.
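For contrast, the conventional "locking" approach described above can be sketched as follows. This is a minimal illustration, not code from the patent; the class and method names are invented for the example:

```python
import threading
from collections import deque

# Conventional scheme: one shared cache guarded by a lock, so that a
# store and an extraction can never run at the same time ("atomicity").
class LockedCache:
    def __init__(self):
        self._lock = threading.Lock()
        self._buf = deque()

    def store(self, packet):
        with self._lock:                 # every store acquires the lock
            self._buf.append(packet)

    def extract(self):
        with self._lock:                 # every extraction contends for it too
            return self._buf.popleft() if self._buf else None

cache = LockedCache()
cache.store(b"pkt1")
cache.store(b"pkt2")
```

At high traffic every operation serializes on the single lock, which is the bottleneck the embodiments below set out to remove.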
Summary of the invention
In view of this, embodiments of the present invention provide a lock-free data gathering method and a lock-free data gathering device, to solve the technical problem described in the background section above.
The embodiments of the present invention adopt the following technical solutions:
In a first aspect, an embodiment of the present invention provides a lock-free data gathering method, including:
opening up at least one cache array in the cache, where the attributes of a cache array include a fetch pointer, a store pointer, and a cache flag identifying whether the store pointer is before the fetch pointer;
creating at least one store thread for saving packets extracted from clients into a cache array and at least one fetch thread for taking packets out of a cache array, such that each opened cache array has exactly one corresponding store thread and one corresponding fetch thread; and
letting the store thread and fetch thread corresponding to each opened cache array run or wait according to the attributes of the corresponding cache array, so as to extract packets from at least one client in parallel and gather them.
In a second aspect, an embodiment of the present invention further provides a lock-free data gathering device, including:
a cache array creating unit, for opening up at least one cache array in the cache, where the attributes of a cache array include a fetch pointer, a store pointer, and a cache flag identifying whether the store pointer is before the fetch pointer;
a thread creation unit, for creating at least one store thread for saving packets extracted from clients into a cache array and at least one fetch thread for taking packets out of a cache array, such that each opened cache array has exactly one corresponding store thread and one corresponding fetch thread; and
a thread running unit, for letting the store thread and fetch thread corresponding to each opened cache array run or wait according to the attributes of the corresponding cache array, so as to extract packets from at least one client in parallel and gather them.
The main beneficial technical effects of the solutions proposed by the embodiments of the present invention are as follows:
By opening up at least one cache array in the cache, creating at least one store thread for saving packets extracted from clients into a cache array and at least one fetch thread for taking packets out of a cache array such that each opened cache array has exactly one corresponding store thread and one corresponding fetch thread, and letting the store thread and fetch thread of each opened cache array run or wait according to the attributes of the corresponding cache array, packets are extracted from at least one client in parallel and gathered. The embodiments of the present invention thereby avoid "locking" the cache, which limits packet extraction efficiency, enable multiple threads to gather data through the cache concurrently, and can improve the efficiency of gathering.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present invention; those of ordinary skill in the art can also derive other drawings from them without creative effort.
Fig. 1 is a flow chart of the lock-free data gathering method described in embodiment one of the present invention;
Fig. 2 is a schematic diagram of lock-free gathering described in embodiment one of the present invention;
Fig. 3 is a flow chart of the store-thread method described in embodiment two of the present invention;
Fig. 4 is a flow chart of the fetch-thread method described in embodiment two of the present invention;
Fig. 5 is a schematic diagram of lock-free gathering described in embodiment two of the present invention;
Fig. 6 is a structural block diagram of the lock-free data gathering device described in embodiment three of the present invention;
Fig. 7 is a structural block diagram of the lock-free data gathering device described in embodiment four of the present invention.
Specific embodiments
To make the technical problem solved, the technical solutions adopted, and the technical effects achieved by the present invention clearer, the technical solutions of the embodiments of the present invention are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The technical solutions of the present invention are further illustrated below with reference to the drawings and through specific embodiments.
Embodiment one
Fig. 1 is a flow chart of the lock-free data gathering method described in this embodiment. As shown in Fig. 1, the method described in this embodiment includes:
S101: Open up at least one cache array in the cache.
The server opens up at least one cache array in the cache. The attributes of each created cache array include a fetch pointer, a store pointer, and a cache flag identifying whether the store pointer is before the fetch pointer.
So that the store thread and fetch thread corresponding to a cache array can share that array's attributes, and to guarantee that they see a consistent view of its fetch pointer, store pointer, and cache flag while operating on the array, the fetch pointer, store pointer, and cache flag of each cache array can be declared as volatile variables held in registers of the server.
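A minimal sketch of one cache array and its three attributes, assuming a byte-buffer layout (the patent gives no concrete layout, so all field names are illustrative). In a C implementation the three shared fields would be the candidates for the volatile declaration described above:

```python
class CacheArray:
    """One cache array opened up in the cache (S101)."""
    def __init__(self, size):
        self.buf = bytearray(size)  # the storage region of this array
        self.size = size
        self.fetch_ptr = 0          # fetch pointer: next position to read
        self.store_ptr = 0          # store pointer: next position to write
        # Cache flag: True when the store pointer has wrapped around and
        # now lies before the fetch pointer.
        self.wrap_flag = False

arr = CacheArray(4096)
```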
S102: Create at least one store thread for saving packets extracted from clients into a cache array, and at least one fetch thread for taking packets out of a cache array.
The server creates at least one store thread for saving packets extracted from clients into a cache array and at least one fetch thread for taking packets out of a cache array, so that every opened cache array satisfies: it has one and only one corresponding store thread, and one and only one corresponding fetch thread.
A store thread stores packets into its corresponding cache array; based on the attributes of that array and the size of the packet to be stored, it decides whether to wait, and after a store completes it modifies the attributes of the corresponding cache array.
A fetch thread extracts packets from its corresponding cache array, and after an extraction completes it modifies the attributes of the corresponding cache array.
S103: Let the store thread and fetch thread corresponding to each opened cache array run or wait according to the attributes of the corresponding cache array, so as to extract packets from at least one client in parallel and gather them.
After the threads have been created in step S102, the server runs all of them, so that the store thread and fetch thread of each cache array carry out the storage and extraction of data.
For example, as shown in Fig. 2, the server creates cache arrays 1, 2, 3, and so on up to M; creates a corresponding store thread 1, 2, 3, ..., M for each of them; creates fetch thread 1 for cache arrays 1 and 2; and creates fetch thread 2 for cache arrays 3 and M. Store threads 1 through M and fetch threads 1 and 2 run in parallel and gather the packets extracted from clients 1, 2, 3, ..., M. For instance, store thread 1 extracts a packet from client 1 and, from the size of the extracted packet and the attributes of cache array 1, decides whether to wait or to store the packet; if it stores the packet, it modifies the attributes of cache array 1 after the store completes. Fetch thread 1 takes packets out of cache array 1; each time it extracts a packet, it modifies the attributes of cache array 1 once.
The technical solution described in this embodiment eliminates the prior-art practice of locking the cache and proposes a lock-free data gathering method: by opening up cache arrays and creating store threads and fetch threads such that each opened cache array has exactly one corresponding store thread and one corresponding fetch thread, the store thread and fetch thread of each cache array can access the data in the cache array concurrently. Multiple threads thereby gather data through the cache concurrently, which can improve the efficiency of gathering.
Embodiment two
Compared with embodiment one, this embodiment introduces a cache address queue and further elaborates how store threads and fetch threads store and extract packets according to the store pointer, fetch pointer, and cache flag of the corresponding cache array.
The lock-free data gathering method proposed by this embodiment includes:
First step: the server opens up at least one cache array in the cache; the attributes of each created cache array include a fetch pointer, a store pointer, and a cache flag identifying whether the store pointer is before the fetch pointer.
Second step: the server creates at least one store thread for saving packets extracted from clients into a cache array and at least one fetch thread for taking packets out of a cache array, so that every opened cache array satisfies: one and only one corresponding store thread, and one and only one corresponding fetch thread.
Third step: the server opens up, for each fetch thread, at least one corresponding cache address queue in the cache. Each opened cache address queue stores the address objects of the packets saved in at least one opened cache array, where an address object includes the cache array identifier of the cache array in which the packet resides.
Further, an address object may also include the start address and data length of the stored packet in the cache; or, an address object may include the relative start address and data length of the stored packet within its corresponding cache array.
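The address object described in the third step might be modeled as below. The field names are assumptions, and `offset` stands for either variant (absolute start address in the cache, or relative start address within the identified cache array):

```python
from dataclasses import dataclass

@dataclass
class AddressObject:
    array_id: int   # cache array identifier: which array holds the packet
    offset: int     # start address (absolute in the cache, or relative)
    length: int     # data length of the stored packet

obj = AddressObject(array_id=1, offset=128, length=64)
```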
Fourth step: the server lets the store thread and fetch thread corresponding to each opened cache array run or wait according to the attributes of the corresponding cache array, so as to extract packets from at least one client in parallel and gather them.
Further, in the fourth step, the way the server has each opened cache array's store thread and fetch thread run or wait according to the attributes of the corresponding cache array is specifically as follows.
The server makes the store thread corresponding to a cache array: receive packets sent by at least one client, store each received packet into the corresponding cache array according to that array's attributes, form the address information of the stored packet into an address object, and place it into the cache address queue of the fetch thread corresponding to that cache array.
The server makes the fetch thread corresponding to a cache array: extract an address object from its corresponding cache address queue, obtain the attributes of the cache array identified by the cache array identifier in the extracted address object, and extract the packet according to the obtained attributes and the extracted address object.
Specifically, Fig. 3 is a flow chart of the store-thread method described in this embodiment. As shown in Fig. 3, the store-thread method of this embodiment includes:
S301: Obtain the size of the packet to be stored.
S302: Obtain the fetch pointer, store pointer, and cache flag of the cache array corresponding to this store thread.
S303: Judge whether the cache flag is true; if so, execute step S304, otherwise execute step S307.
S304: Judge whether the fetch pointer and the store pointer are equal; if so, execute step S305, otherwise execute step S306.
S305: Make this store thread wait.
S306: Judge whether the space between the store pointer and the fetch pointer is smaller than the size of the packet to be stored; if so, execute step S305, otherwise execute step S310.
S307: Judge whether the space between the store pointer and the last address of the cache array is smaller than the size of the packet to be stored; if so, execute step S308, otherwise execute step S310.
S308: Judge whether the space between the start address of the cache array corresponding to this store thread and the fetch pointer is smaller than the size of the packet to be stored; if so, execute step S305, otherwise execute step S309.
The purpose of this step is: if the space from the store pointer to the end of the cache array is insufficient for the packet to be stored, skip that space and store the packet at the start address of the cache array.
In another embodiment of the present invention, this step may instead be: judge whether the remaining space of the cache array is smaller than the size of the packet to be stored; if so, execute step S305, otherwise execute step S309.
S309: Change the cache flag to true, then execute step S310.
S310: Store the packet to be stored; end.
The step of storing the packet to be stored specifically includes: storing the packet into the corresponding cache array according to that array's store pointer and modifying the store pointer of the corresponding cache array, and recording the address of the packet within the corresponding cache array into the cache address queue of the fetch thread corresponding to that cache array.
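The store-thread flow of steps S301 to S310 can be sketched as a single non-blocking attempt, where returning False stands for "make the store thread wait" (S305). The `CacheArray` layout and the `(start, length)` tuple used as an address object are illustrative assumptions, not the patent's literal structures:

```python
from collections import deque

class CacheArray:
    def __init__(self, size):
        self.buf = bytearray(size)
        self.size = size
        self.fetch_ptr = 0       # fetch pointer
        self.store_ptr = 0       # store pointer
        self.wrap_flag = False   # cache flag: store pointer is before fetch pointer

def try_store(arr, packet, addr_queue):
    n = len(packet)                            # S301: size of packet to store
    if arr.wrap_flag:                          # S303: flag is true (wrapped)
        if arr.fetch_ptr == arr.store_ptr:     # S304: pointers equal -> array full
            return False                       # S305: wait
        if arr.fetch_ptr - arr.store_ptr < n:  # S306: gap up to fetch pointer too small
            return False                       # S305: wait
    elif arr.size - arr.store_ptr < n:         # S307: no room before the last address
        if arr.fetch_ptr < n:                  # S308: no room at the start either
            return False                       # S305: wait
        arr.wrap_flag = True                   # S309: flag the wrap-around
        arr.store_ptr = 0                      # skip the unusable tail space
    start = arr.store_ptr                      # S310: store the packet
    arr.buf[start:start + n] = packet
    arr.store_ptr = start + n                  # modify the store pointer
    addr_queue.append((start, n))              # record the address for the fetch thread
    return True

arr, q = CacheArray(8), deque()
assert try_store(arr, b"abcd", q)
```

With an 8-byte array, two 4-byte stores fill it; a third attempt fails at S307/S308 until the fetch thread frees space.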
Finally, the packets extracted by the at least two fetch threads are gathered.
Specifically, Fig. 4 is a flow chart of the fetch-thread method described in this embodiment. As shown in Fig. 4, the fetch-thread method of this embodiment includes:
S401: Extract an address object from the corresponding cache address queue.
S402: According to the extracted address object, extract the packet from the cache array identified by the cache array identifier in the address object.
It should be noted that if the address object in this embodiment includes the cache array identifier together with the start address and data length of the stored packet in the cache, the data is read directly from that start address for that data length; if the address object includes the cache array identifier together with the relative start address and data length of the stored packet within its cache array, the corresponding cache array is first found through the cache array identifier, and data of the data length given in the address object is then taken starting from the fetch pointer of that cache array.
S403: According to the address object, modify the fetch pointer in the attributes of the cache array identified by the cache array identifier in the address object.
S404: Judge whether the fetch pointer has become smaller than it was before the modification; if so, execute step S405, otherwise execute step S406.
S405: Set the cache flag to false.
S406: Leave the cache flag unchanged.
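The fetch-thread flow of steps S401 to S406 can likewise be sketched as one attempt, under the same illustrative layout; the address object here is the `(start, length)` pair the store thread recorded:

```python
from collections import deque

class CacheArray:
    def __init__(self, size):
        self.buf = bytearray(size)
        self.size = size
        self.fetch_ptr = 0
        self.store_ptr = 0
        self.wrap_flag = False

def try_fetch(arr, addr_queue):
    if not addr_queue:
        return None                            # no address object to take yet
    start, n = addr_queue.popleft()            # S401: extract an address object
    packet = bytes(arr.buf[start:start + n])   # S402: extract the packet
    old = arr.fetch_ptr
    arr.fetch_ptr = start + n                  # S403: modify the fetch pointer
    if arr.fetch_ptr < old:                    # S404: pointer became smaller -> wrapped
        arr.wrap_flag = False                  # S405: set the cache flag to false
    return packet                              # S406: otherwise flag unchanged

arr, q = CacheArray(8), deque()
arr.buf[0:4], arr.buf[4:8] = b"abcd", b"wxyz"
q.extend([(0, 4), (4, 4)])
```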
For example, as shown in Fig. 5, the server creates cache arrays 1, 2, 3, and so on up to M; creates corresponding store threads 1, 2, 3, ..., M; creates fetch thread 1 for cache arrays 1 and 2, and fetch thread 2 for cache arrays 3 and M; and creates cache address queue 1 for fetch thread 1 and cache address queue 2 for fetch thread 2.
Store threads 1 through M and fetch threads 1 and 2 run in parallel and gather the packets extracted from clients 1, 2, 3, ..., M. For instance, store thread 1 extracts a packet from client 1 and, from the size of the extracted packet and the attributes of cache array 1, decides whether to wait or to store the packet; if it stores the packet, it modifies the attributes of cache array 1 after the store completes, forms the address information of the stored packet into an address object, and places it into cache address queue 1. Fetch thread 1 extracts an address object from cache address queue 1; since the packet that this address object points to is stored in cache array 1, fetch thread 1 obtains the attributes of cache array 1 and extracts the packet according to those attributes and the extracted address object.
Compared with embodiment one, this embodiment introduces the cache address queue: for each fetch thread, at least one corresponding cache address queue is opened up in the cache, and each opened cache address queue stores the address objects of the packets saved in at least one opened cache array. Each store thread and fetch thread uses the fetch pointer, store pointer, and cache flag of its corresponding cache array to control its own progress, which guarantees efficient concurrent access to the data and can improve the efficiency of data gathering.
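Putting embodiment two together: the sketch below pairs each cache array with one store thread and lets a single fetch thread drain two arrays through their address queues, coordinating only through the three attributes. Busy-waiting stands in for "thread waits", CPython's per-operation atomicity stands in for the volatile attributes, and all names and sizes are illustrative:

```python
import threading
from collections import deque

class CacheArray:
    def __init__(self, size):
        self.buf = bytearray(size)
        self.size = size
        self.fetch_ptr = 0
        self.store_ptr = 0
        self.wrap_flag = False

def try_store(arr, pkt, q):              # store-thread step (Fig. 3)
    n = len(pkt)
    if arr.wrap_flag:
        if arr.store_ptr == arr.fetch_ptr or arr.fetch_ptr - arr.store_ptr < n:
            return False                 # array full: the store thread waits
    elif arr.size - arr.store_ptr < n:
        if arr.fetch_ptr < n:
            return False                 # no room at the start either: wait
        arr.wrap_flag, arr.store_ptr = True, 0   # wrap to the start
    start = arr.store_ptr
    arr.buf[start:start + n] = pkt
    arr.store_ptr = start + n
    q.append((start, n))                 # address object for the fetch thread
    return True

def try_fetch(arr, q):                   # fetch-thread step (Fig. 4)
    if not q:
        return None
    start, n = q.popleft()
    pkt = bytes(arr.buf[start:start + n])
    old, arr.fetch_ptr = arr.fetch_ptr, start + n
    if arr.fetch_ptr < old:              # crossed the wrap point
        arr.wrap_flag = False
    return pkt

def store_loop(arr, q, packets):         # one store thread per cache array
    for pkt in packets:
        while not try_store(arr, pkt, q):
            pass                         # busy-wait in place of "thread waits"

def fetch_loop(arrays, queues, expected, out):
    while len(out) < expected:           # gather from every paired array
        for arr, q in zip(arrays, queues):
            pkt = try_fetch(arr, q)
            if pkt is not None:
                out.append(pkt)

arrays, queues = [CacheArray(16), CacheArray(16)], [deque(), deque()]
feeds = [[b"a1", b"a2"] * 10, [b"b1", b"b2"] * 10]   # stand-ins for two clients
gathered = []
threads = [threading.Thread(target=store_loop, args=(a, q, f))
           for a, q, f in zip(arrays, queues, feeds)]
threads.append(threading.Thread(target=fetch_loop,
                                args=(arrays, queues, 40, gathered)))
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Per-client packet order is preserved because each array has a single store thread and a single fetch thread, and no lock is taken anywhere.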
Embodiment three
Fig. 6 is a structural block diagram of the lock-free data gathering device described in this embodiment. As shown in Fig. 6, the device described in this embodiment includes:
a cache array creating unit 601, for opening up at least one cache array in the cache, where the attributes of a cache array include a fetch pointer, a store pointer, and a cache flag identifying whether the store pointer is before the fetch pointer.
The server opens up at least one cache array in the cache, and the attributes of each created cache array include a fetch pointer, a store pointer, and a cache flag identifying whether the store pointer is before the fetch pointer.
So that the store thread and fetch thread corresponding to a cache array can share that array's attributes, and to guarantee that they see a consistent view of its fetch pointer, store pointer, and cache flag while operating on the array, the fetch pointer, store pointer, and cache flag of each cache array can be declared as volatile variables held in registers of the server.
a thread creation unit 602, for creating at least one store thread for saving packets extracted from clients into a cache array and at least one fetch thread for taking packets out of a cache array, so that each opened cache array has exactly one corresponding store thread and one corresponding fetch thread.
The server creates at least one store thread for saving packets extracted from clients into a cache array and at least one fetch thread for taking packets out of a cache array, so that every opened cache array satisfies: one and only one corresponding store thread, and one and only one corresponding fetch thread.
A store thread stores packets into its corresponding cache array; based on the attributes of that array and the size of the packet to be stored, it decides whether to wait, and after a store completes it modifies the attributes of the corresponding cache array.
A fetch thread extracts packets from its corresponding cache array, and after an extraction completes it modifies the attributes of the corresponding cache array.
a thread running unit 603, for letting the store thread and fetch thread corresponding to each opened cache array run or wait according to the attributes of the corresponding cache array, so as to extract packets from at least one client in parallel and gather them.
The technical solution described in this embodiment eliminates the prior-art practice of locking the cache and proposes a lock-free data gathering device: by opening up cache arrays and creating store threads and fetch threads such that each opened cache array has exactly one corresponding store thread and one corresponding fetch thread, the store thread and fetch thread of each cache array can access the data in the cache array concurrently. Multiple threads thereby gather data through the cache concurrently, which can improve the efficiency of gathering.
Embodiment four
Fig. 7 is a structural block diagram of the lock-free data gathering device described in this embodiment. As shown in Fig. 7, the device described in this embodiment includes:
a cache array creating unit 701, for opening up at least one cache array in the cache, where the attributes of a cache array include a fetch pointer, a store pointer, and a cache flag identifying whether the store pointer is before the fetch pointer;
a thread creation unit 702, for creating at least one store thread for saving packets extracted from clients into a cache array and at least one fetch thread for taking packets out of a cache array, so that each opened cache array has exactly one corresponding store thread and one corresponding fetch thread;
an address queue creating unit 703, for opening up, after the thread creation unit 702 has created the store threads and fetch threads, at least one corresponding cache address queue in the cache for each fetch thread, where each opened cache address queue stores the address objects of the packets saved in at least one opened cache array, and an address object includes the cache array identifier of the cache array in which the packet resides; and
a thread running unit 704, for letting the store thread and fetch thread corresponding to each opened cache array run or wait according to the attributes of the corresponding cache array, so as to extract packets from at least one client in parallel and gather them.
Further, the thread running unit 704 includes a store-thread running subunit 7041 and a fetch-thread running subunit 7042.
The store-thread running subunit 7041 makes the store thread corresponding to a cache array: receive packets sent by at least one client, store each received packet into the corresponding cache array according to that array's attributes, form the address information of the stored packet into an address object, and place it into the cache address queue of the fetch thread corresponding to that cache array.
The fetch-thread running subunit 7042 makes the fetch thread corresponding to a cache array: extract an address object from its corresponding cache address queue, obtain the attributes of the cache array identified by the cache array identifier in the extracted address object, and extract the packet according to the obtained attributes and the extracted address object.
Further, in the store-thread running subunit 7041, storing a received packet into the corresponding cache array according to that array's attributes includes:
obtaining the size of the received packet, and obtaining the fetch pointer, store pointer, and cache flag of the corresponding cache array;
if the cache flag is true and the fetch pointer is not equal to the store pointer, judging whether the space between the store pointer and the fetch pointer is smaller than the size of the received packet; if so, making the store thread wait, otherwise storing the received packet;
if the cache flag is true and the fetch pointer is equal to the store pointer, making the store thread wait;
if the cache flag is false, and the space between the store pointer and the last address of the cache array is smaller than the size of the received packet, judging whether the remaining space of the corresponding cache array is smaller than the size of the packet to be stored; if so, making the store thread wait, otherwise changing the cache flag to true and storing the received packet;
in another embodiment of the present invention, the above step may instead be: if the cache flag is false, and the space between the store pointer and the last address of the cache array is smaller than the size of the received packet, judging whether the space between the start address of the cache array and the fetch pointer is smaller than the size of the packet to be stored; if so, making the store thread wait, otherwise changing the cache flag to true and storing the received packet;
if the cache flag is false, and the space between the store pointer and the last address of the cache array is not smaller than the size of the packet to be stored, storing the received packet;
where the step of storing the received packet includes: storing the received packet into the corresponding cache array according to the store pointer and modifying the store pointer, and recording the address object of the received packet within the corresponding cache array into the cache address queue corresponding to that cache array.
Further, the extraction of a packet by the fetch-thread running subunit 7042 according to the acquired attributes and the extracted address object specifically includes:
extracting the packet, according to the extracted address object, from the cache array identified by the cache-array identifier in the address object;
updating, according to the address object, the fetch pointer in the attributes of the cache array identified by the cache-array identifier in the address object;
judging whether the updated fetch pointer in the attributes of that cache array has become smaller than before the update, and if so setting the cache flag in the attributes of that cache array to false.
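The fetch-side steps can likewise be sketched. Again this is a hypothetical illustration: `try_fetch` and the `state` layout are assumed names, and the address object is reduced to an `(offset, length)` pair.

```python
def try_fetch(state, buf, addr_obj):
    """Copy out the packet described by addr_obj and advance the fetch pointer.

    When the new fetch pointer is smaller than the old one, the reader has
    wrapped past the end of the array, so the store pointer is no longer
    ahead of it and the cache flag is cleared."""
    off, length = addr_obj
    packet = bytes(buf[off:off + length])
    before = state["get"]
    state["get"] = off + length    # fetch pointer now follows the packet
    if state["get"] < before:      # pointer diminished: the reader wrapped
        state["flag"] = False
    return packet
```

For example, with the writer already wrapped (flag true, packets at offsets 4 and 0 of a 10-byte array), fetching the packet at offset 4 leaves the flag set, while fetching the wrapped packet at offset 0 moves the pointer backwards and clears it.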
Further, the address object also includes the start address and data length of the stored packet in the cache.
Further, the address object also includes the relative address, with respect to the start address of the corresponding cache array, and data length of the stored packet.
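As an illustration only, the fields the description attributes to the address object could be collected in a structure like the following; all field names are assumptions, not the patent's own.

```python
from dataclasses import dataclass

@dataclass
class AddressObject:
    array_id: int    # identifier of the cache array holding the packet
    start: int       # start address of the packet in the cache
    rel_start: int   # start offset relative to the cache array's base address
    length: int      # data length of the packet in bytes
```

A fetch thread pops one such record from its buffer-address queue and uses `array_id` to locate the cache array before copying `length` bytes out.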
Compared with embodiment three, this embodiment introduces the address-queue creating unit 703, which opens up at least one corresponding buffer-address queue in the cache for each fetch thread; each opened buffer-address queue stores the address objects of packets preserved in at least one of the opened cache arrays, so that each store thread and fetch thread controls its own running or waiting according to the fetch pointer, store pointer and cache flag of its corresponding cache array. This guarantees efficient concurrent access to the data and improves the efficiency of convergence.
The following experimental results compare the data-extraction efficiency of the technical scheme of the embodiments of the present invention against the traditional approach of "locking" each cache array:
Network environment: gigabit-bandwidth LAN.
Transmission means: data transmitted over the TCP/IP protocol.
Result: with packets of about 200 B, the data retrieval rate is 25-35 Mbps; with packets between 1 KB and 4 KB, the rate reaches 70-80 Mbps; with packets above 1 MB, the rate exceeds 80 Mbps.
When the same tests are run with each cache array in the cache locked, the retrieval rate stays at only about 10 Mbps regardless of packet size, because a thread that fails to obtain the lock can only wait, and moving threads from the waiting state back to the runnable state again consumes substantial resources, so the overall access rate cannot satisfy large-scale data transmission at all.
All or part of the technical schemes provided by the above embodiments can be implemented by software programming, and the software program can be stored in a readable storage medium, for example a hard disk, optical disc or floppy disk in a computer.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the invention is not restricted to the specific embodiments described here; various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to them; without departing from the inventive concept it may include other equivalent embodiments, and the scope of the present invention is determined by the scope of the appended claims.
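To make the overall scheme concrete before the claims, the following self-contained sketch wires one store thread and one fetch thread together through an address queue. All names are illustrative; Python's synchronized `queue.Queue` stands in for the lock-free buffer-address queue of the invention, and wrap-around handling in the cache array is omitted for brevity.

```python
import queue
import threading

CAP = 1 << 16
buf = bytearray(CAP)            # one cache array opened up in the cache
addr_q = queue.Queue()          # buffer-address queue for one fetch thread
put = 0                         # store pointer (single writer)

def store_thread(packets):
    """Writer: copy each packet into the cache array, then publish its
    address object (offset, length) through the address queue."""
    global put
    for pkt in packets:
        buf[put:put + len(pkt)] = pkt   # demo assumes no wrap is needed
        addr_q.put((put, len(pkt)))
        put += len(pkt)
    addr_q.put(None)                    # sentinel: no more packets

def fetch_thread(out):
    """Reader: drain address objects and copy the packets out in order."""
    while True:
        item = addr_q.get()
        if item is None:
            break
        off, length = item
        out.append(bytes(buf[off:off + length]))

packets = [b"alpha", b"beta", b"gamma"]
received = []
w = threading.Thread(target=store_thread, args=(packets,))
r = threading.Thread(target=fetch_thread, args=(received,))
w.start(); r.start()
w.join(); r.join()
```

The design point the sketch preserves is that the reader never scans the cache array itself: it learns where each packet lies only from the address objects, so writer and reader touch disjoint regions of the array.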
Claims (10)
1. A lock-free data gathering method, characterized in that it comprises:
opening up at least one cache array in a cache, the attributes of the cache array including a fetch pointer, a store pointer and a cache flag for identifying whether the store pointer is before the fetch pointer;
creating at least one store thread for saving packets extracted from clients into the cache arrays and at least one fetch thread for taking packets out of the cache arrays, so that each opened cache array has one corresponding store thread and one corresponding fetch thread;
opening up, for each fetch thread, at least one corresponding buffer-address queue in the cache, each opened buffer-address queue being used for storing the address objects of the packets preserved in at least one of the opened cache arrays, wherein the address object includes a cache-array identifier identifying the cache array in which the packet resides;
making the store thread and fetch thread corresponding to each opened cache array run or wait according to the attributes of the corresponding cache array, so as to extract packets from at least one client concurrently for convergence, wherein the store thread corresponding to a cache array is used for: receiving packets sent by at least one client, storing a received packet into the corresponding cache array according to the attributes of that cache array, and forming an address object from the address information of the stored packet and storing it into the buffer-address queue corresponding to the fetch thread of that cache array;
and the fetch thread corresponding to a cache array is used for: extracting an address object from the corresponding buffer-address queue, acquiring the attributes of the cache array identified by the cache-array identifier in the extracted address object, and extracting the packet according to the acquired attributes and the extracted address object, wherein the step of storing the received packet into the corresponding cache array according to the attributes of that cache array includes:
acquiring the size of the received packet, and acquiring the fetch pointer, store pointer and cache flag of the corresponding cache array;
if the cache flag is true and the fetch pointer is not equal to the store pointer, judging whether the space between the store pointer and the fetch pointer is smaller than the size of the received packet, and if so making the store thread wait, otherwise storing the received packet;
if the cache flag is true and the fetch pointer is equal to the store pointer, making the store thread wait;
if the cache flag is false and the space between the store pointer and the last address of the cache array is smaller than the size of the received packet, judging whether the remaining space of the corresponding cache array is smaller than the size of the packet to be stored, and if so making the store thread wait, otherwise changing the cache flag to true and storing the received packet;
if the cache flag is false and the space between the store pointer and the last address of the cache array is not smaller than the size of the packet to be stored, storing the received packet;
wherein the step of storing the received packet includes: storing the received packet into the corresponding cache array according to the store pointer and updating the store pointer, and recording the address object of the received packet in the corresponding cache array into the buffer-address queue specified by that cache array.
2. The lock-free data gathering method as claimed in claim 1, characterized in that the step of storing the received packet into the corresponding cache array according to the attributes of that cache array includes:
acquiring the size of the received packet, and acquiring the fetch pointer, store pointer and cache flag of the corresponding cache array;
if the cache flag is true and the fetch pointer is not equal to the store pointer, judging whether the space between the store pointer and the fetch pointer is smaller than the size of the received packet, and if so making the store thread wait, otherwise storing the received packet;
if the cache flag is true and the fetch pointer is equal to the store pointer, making the store thread wait;
if the cache flag is false and the space between the store pointer and the last address of the cache array is smaller than the size of the received packet, judging whether the space between the start address of the cache array and the fetch pointer is smaller than the size of the packet to be stored, and if so making the store thread wait, otherwise changing the cache flag to true and storing the received packet;
if the cache flag is false and the space between the store pointer and the last address of the cache array is not smaller than the size of the packet to be stored, storing the received packet;
wherein the step of storing the received packet includes: storing the received packet into the corresponding cache array according to the store pointer and updating the store pointer, and recording the address object of the received packet in the corresponding cache array into the buffer-address queue specified by that cache array.
3. The lock-free data gathering method as claimed in claim 1, characterized in that the step of extracting the packet according to the acquired attributes and the extracted address object specifically includes:
extracting the packet, according to the extracted address object, from the cache array identified by the cache-array identifier in the address object;
updating, according to the address object, the fetch pointer in the attributes of the cache array identified by the cache-array identifier in the address object;
judging whether the updated fetch pointer in the attributes of that cache array has become smaller than before the update, and if so setting the cache flag in the attributes of that cache array to false.
4. The lock-free data gathering method as claimed in claim 1, characterized in that the address object also includes the start address and data length of the stored packet in the cache.
5. The lock-free data gathering method as claimed in claim 1, characterized in that the address object also includes the relative address, with respect to the start address of the corresponding cache array, and data length of the stored packet.
6. A lock-free data gathering device, characterized in that it comprises:
a cache-array creating unit for opening up at least one cache array in a cache, the attributes of the cache array including a fetch pointer, a store pointer and a cache flag for identifying whether the store pointer is before the fetch pointer;
a thread creating unit for creating at least one store thread for saving packets extracted from clients into the cache arrays and at least one fetch thread for taking packets out of the cache arrays, so that each opened cache array has one corresponding store thread and one corresponding fetch thread;
an address-queue creating unit for, after the thread creating unit has created the store threads and fetch threads, opening up for each fetch thread at least one corresponding buffer-address queue in the cache, each opened buffer-address queue being used for storing the address objects of the packets preserved in at least one of the opened cache arrays, wherein the address object includes a cache-array identifier identifying the cache array in which the packet resides;
a thread running unit for making the store thread and fetch thread corresponding to each opened cache array run or wait according to the attributes of the corresponding cache array, so as to extract packets from at least one client concurrently for convergence; the thread running unit includes a store-thread running subunit and a fetch-thread running subunit; the store-thread running subunit is used for making the store thread corresponding to a cache array: receive packets sent by at least one client, store a received packet into the corresponding cache array according to the attributes of that cache array, and form an address object from the address information of the stored packet and store it into the buffer-address queue corresponding to the fetch thread of that cache array; the fetch-thread running subunit is used for making the fetch thread corresponding to a cache array: extract an address object from the corresponding buffer-address queue, acquire the attributes of the cache array identified by the cache-array identifier in the extracted address object, and extract the packet according to the acquired attributes and the extracted address object; wherein storing the received packet into the corresponding cache array according to the attributes of that cache array in the store-thread running subunit includes: acquiring the size of the received packet, and acquiring the fetch pointer, store pointer and cache flag of the corresponding cache array; if the cache flag is true and the fetch pointer is not equal to the store pointer, judging whether the space between the store pointer and the fetch pointer is smaller than the size of the received packet, and if so making the store thread wait, otherwise storing the received packet; if the cache flag is true and the fetch pointer is equal to the store pointer, making the store thread wait; if the cache flag is false and the space between the store pointer and the last address of the cache array is smaller than the size of the received packet, judging whether the remaining space of the corresponding cache array is smaller than the size of the packet to be stored, and if so making the store thread wait, otherwise changing the cache flag to true and storing the received packet; if the cache flag is false and the space between the store pointer and the last address of the cache array is not smaller than the size of the packet to be stored, storing the received packet; wherein the step of storing the received packet includes: storing the received packet into the corresponding cache array according to the store pointer and updating the store pointer, and recording the address object of the received packet in the corresponding cache array into the buffer-address queue specified by that cache array.
7. The lock-free data gathering device as claimed in claim 6, characterized in that storing the received packet into the corresponding cache array according to the attributes of that cache array in the store-thread running subunit includes:
acquiring the size of the received packet, and acquiring the fetch pointer, store pointer and cache flag of the corresponding cache array;
if the cache flag is true and the fetch pointer is not equal to the store pointer, judging whether the space between the store pointer and the fetch pointer is smaller than the size of the received packet, and if so making the store thread wait, otherwise storing the received packet;
if the cache flag is true and the fetch pointer is equal to the store pointer, making the store thread wait;
if the cache flag is false and the space between the store pointer and the last address of the cache array is smaller than the size of the received packet, judging whether the space between the start address of the cache array and the fetch pointer is smaller than the size of the packet to be stored, and if so making the store thread wait, otherwise changing the cache flag to true and storing the received packet;
if the cache flag is false and the space between the store pointer and the last address of the cache array is not smaller than the size of the packet to be stored, storing the received packet;
wherein the step of storing the received packet includes: storing the received packet into the corresponding cache array according to the store pointer and updating the store pointer, and recording the address object of the received packet in the corresponding cache array into the buffer-address queue specified by that cache array.
8. The lock-free data gathering device as claimed in claim 6, characterized in that extracting the packet according to the acquired attributes and the extracted address object in the fetch-thread running subunit specifically includes:
extracting the packet, according to the extracted address object, from the cache array identified by the cache-array identifier in the address object;
updating, according to the address object, the fetch pointer in the attributes of the cache array identified by the cache-array identifier in the address object;
judging whether the updated fetch pointer in the attributes of that cache array has become smaller than before the update, and if so setting the cache flag in the attributes of that cache array to false.
9. The lock-free data gathering device as claimed in claim 6, characterized in that the address object also includes the start address and data length of the stored packet in the cache.
10. The lock-free data gathering device as claimed in claim 6, characterized in that the address object also includes the relative address, with respect to the start address of the corresponding cache array, and data length of the stored packet.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310413005.2A CN103488717B (en) | 2013-09-11 | 2013-09-11 | Lock-free data gathering method and lock-free data gathering device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103488717A CN103488717A (en) | 2014-01-01 |
CN103488717B true CN103488717B (en) | 2017-02-22 |
Family
ID=49828943
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310413005.2A Active CN103488717B (en) | 2013-09-11 | 2013-09-11 | Lock-free data gathering method and lock-free data gathering device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103488717B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104850507B (en) * | 2014-02-18 | 2019-03-15 | 腾讯科技(深圳)有限公司 | A kind of data cache method and data buffer storage |
CN106354572A (en) * | 2016-08-31 | 2017-01-25 | 成都科来软件有限公司 | Multi-thread data transmission method |
CN106789917B (en) * | 2016-11-25 | 2019-10-01 | 北京百家互联科技有限公司 | Data package processing method and device |
CN106909321A (en) * | 2017-02-24 | 2017-06-30 | 郑州云海信息技术有限公司 | A kind of control method and device based on storage system |
CN113176896B (en) * | 2021-03-19 | 2022-12-13 | 中盈优创资讯科技有限公司 | Method for randomly taking out object based on single-in single-out lock-free queue |
CN113778674A (en) * | 2021-08-31 | 2021-12-10 | 上海弘积信息科技有限公司 | Lock-free implementation method of load balancing equipment configuration management under multi-core |
CN114900713B (en) * | 2022-07-13 | 2022-09-30 | 深圳市必提教育科技有限公司 | Video clip processing method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101127989A (en) * | 2007-09-11 | 2008-02-20 | 中兴通讯股份有限公司 | A method for supporting hypertext transmission stream media service of mobile phone |
CN101631139A (en) * | 2009-05-19 | 2010-01-20 | 华耀环宇科技(北京)有限公司 | Load balancing software architecture based on multi-core platform and method therefor |
CN102053923A (en) * | 2009-11-05 | 2011-05-11 | 北京金山软件有限公司 | Storage method and storage device for logbook data |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8737417B2 (en) * | 2010-11-12 | 2014-05-27 | Alcatel Lucent | Lock-less and zero copy messaging scheme for telecommunication network applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |