CN117891625A - Data sharing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117891625A
CN117891625A
Authority
CN
China
Prior art keywords
processed, bucket, hash, hash bucket, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410071214.1A
Other languages
Chinese (zh)
Inventor
陈振
伍开胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN202410071214.1A priority Critical patent/CN117891625A/en
Publication of CN117891625A publication Critical patent/CN117891625A/en
Pending legal-status Critical Current


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical field of advertisement data processing, in particular to a data sharing method, a device, equipment and a storage medium.

Description

Data sharing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of advertisement data processing technologies, and in particular, to a data sharing method, apparatus, device, and storage medium.
Background
An advertisement engine system maintains a large amount of data that must be shared among threads and updated in real time. During data sharing, the advertisement engine stores the data to be shared in the form of a hash table. The advertisement engine system directly accepts a large number of concurrent working threads from the user side, and these working threads realize data sharing between the user side and the advertisement engine system by reading the data in the hash table. However, the data storage in a traditional advertisement engine system can hardly bear simultaneous reading and writing by a large number of working threads, which affects the data sharing efficiency and data throughput of the advertisement engine system.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The main purpose of the present invention is to provide a data sharing method, apparatus, device and storage medium, aiming to solve the technical problem in the prior art that, when an advertisement search engine performs data sharing, data throughput and the efficiency with which threads read and write data are low.
In order to achieve the above object, the present invention provides a data sharing method, which includes the steps of:
when receiving a data read-write request sent by at least one working thread, inquiring a to-be-processed hash bucket corresponding to each working thread in a preset lock-free hash table according to each data read-write request, wherein the preset lock-free hash table comprises hash values of at least one to-be-shared data in an advertisement engine system;
reading a preset memory value of the hash bucket to be processed;
calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed;
and driving the target working thread to read and write the data in the hash bucket to be processed so as to realize data sharing.
Optionally, the calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed, including:
when a plurality of candidate working threads wait to read and write the hash bucket to be processed, determining a pre-stored expected value of each candidate working thread;
and calling each candidate working thread according to each pre-stored expected value and the preset memory value to perform authority competition on the hash bucket to be processed, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
Optionally, the calling each candidate working thread to perform authority competition on the hash bucket to be processed according to each pre-stored expected value and the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed, including:
matching each pre-stored expected value with the preset memory value through an improved lock-free authority competition model;
and determining a target working thread according to the matching result, and transferring the bucket operation authority of the hash bucket to be processed to the target working thread, wherein the target working thread is a working thread whose pre-stored expected value is equal to the preset memory value of the hash bucket to be processed.
Optionally, the determining the target working thread according to the matching result, and transferring the bucket operation authority of the hash bucket to be processed to the target working thread includes:
when the preset memory value is successfully matched with a pre-stored expected value, determining the target working thread corresponding to the successfully matched pre-stored expected value;
updating the bit array lock flag bit of the hash bucket to be processed through the improved lock-free authority competition model;
and after the flag bit is updated, transferring the bucket operation authority of the hash bucket to be processed to the target working thread.
Optionally, invoking each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed, and further including:
determining the remaining working threads, among the candidate working threads, other than the target working thread;
suspending the remaining working threads through a preset iteration system;
and when the remaining working threads are called again, returning to the step of calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
Optionally, after the suspending the remaining working threads through a preset iteration system, the method further includes:
determining the memory space occupied by the remaining working threads, and releasing the memory space.
Optionally, the querying, according to each data read-write request, a hash bucket to be processed corresponding to each working thread in a preset lock-free hash table includes:
extracting the query code contained in each data read-write request;
querying the unique advertisement identifier corresponding to each query code in the preset lock-free hash table, and determining the hash bucket containing the unique advertisement identifier;
and determining the hash bucket to be processed corresponding to each working thread according to the mapping relation between the query code and the hash bucket.
Optionally, the data sharing method further includes:
when a data writing request is received, acquiring data to be written corresponding to the data writing request;
calculating a hash value to be written corresponding to the data to be written through a preset hash model;
determining a hash bucket to be written according to the hash value to be written;
acquiring a linked list address of the hash bucket to be written;
and inserting the data to be written into the hash bucket to be written according to the linked list address so as to complete data writing.
Optionally, before the data to be written is inserted into the hash bucket to be written according to the linked list address, the method further includes:
acquiring a current load factor of the hash bucket to be written;
estimating a target load factor after data insertion according to the current load factor;
and when the target load factor is smaller than a preset threshold value, inserting the data to be written into the hash bucket to be written according to the linked list address.
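The pre-insertion load factor check described in the three steps above can be sketched as follows; this is an illustrative fragment rather than the patent's implementation, and the 0.75 threshold, the type and the member names are assumptions:

```cpp
#include <cstddef>

// Hypothetical sketch of the pre-insert gate: the target load factor after
// inserting `inserts` elements is estimated from the current element count,
// and the insertion proceeds only while it stays below the threshold.
struct LoadFactorGate {
    std::size_t elements;  // elements currently stored in the bucket array
    std::size_t buckets;   // number of hash buckets

    double target_load_factor(std::size_t inserts) const {
        return static_cast<double>(elements + inserts) / buckets;
    }

    bool can_insert(std::size_t inserts, double threshold = 0.75) const {
        return target_load_factor(inserts) < threshold;  // below preset threshold
    }
};
```

When `can_insert` returns false, the expansion path described further below would be taken instead of a direct insert.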
Optionally, the linked list address includes: a bucket head linked list address and a bucket tail linked list address;
the inserting the data to be written into the hash bucket to be written according to the linked list address comprises the following steps:
inserting the data to be written into the bucket head of the hash bucket to be written through a bucket head insertion strategy according to the bucket head linked list address; or
inserting the data to be written into the bucket tail of the hash bucket to be written through a bucket tail insertion strategy according to the bucket tail linked list address.
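A minimal sketch of the two insertion strategies above; the enum, function name and container choice are my own, with a `std::list` standing in for the patent's bucket linked list:

```cpp
#include <list>

// Head insertion links the new record at the bucket-head linked list address;
// tail insertion appends it at the bucket-tail linked list address.
enum class InsertStrategy { kHead, kTail };

void insert_into_bucket(std::list<int>& bucket, int value, InsertStrategy s) {
    if (s == InsertStrategy::kHead)
        bucket.push_front(value);  // bucket head insertion strategy
    else
        bucket.push_back(value);   // bucket tail insertion strategy
}
```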
Optionally, after the target load factor after data insertion is estimated according to the current load factor, the method further includes:
when the target load factor is greater than or equal to the preset threshold value, reading the array mapping relation and stored data of the preset lock-free hash table;
and inserting the stored data into a capacity-expanded lock-free hash table based on a preset head insertion strategy and the array mapping relation, wherein the linked list length of the capacity-expanded lock-free hash table is twice that of the preset lock-free hash table.
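The expansion step above — re-inserting the stored data into a table of twice the capacity by head insertion, following the key-to-bucket array mapping — might look like this sketch (the function name and the `key % bucket_count` mapping are assumptions; the patent only specifies the doubling and the head insertion strategy):

```cpp
#include <cstddef>
#include <forward_list>
#include <vector>

// Rehash every stored key into a new table with twice as many buckets,
// linking each key at the head of its new bucket (head insertion strategy).
std::vector<std::forward_list<std::size_t>>
expand(const std::vector<std::forward_list<std::size_t>>& old_table) {
    std::vector<std::forward_list<std::size_t>> new_table(old_table.size() * 2);
    for (const auto& bucket : old_table)
        for (std::size_t key : bucket)
            new_table[key % new_table.size()].push_front(key);  // head insertion
    return new_table;
}
```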
In addition, to achieve the above object, the present invention also proposes a data sharing apparatus including:
the query module is used for querying a hash bucket to be processed corresponding to each working thread in a preset lock-free hash table according to each data read-write request when receiving a data read-write request sent by at least one working thread, wherein the preset lock-free hash table comprises a hash value of at least one data to be shared in an advertisement engine system;
The reading module is used for reading the preset memory value of the hash bucket to be processed;
the calling module is used for calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed;
and the driving module is used for driving the target working thread to read and write the data in the hash bucket to be processed so as to realize data sharing.
Optionally, the calling module is further configured to determine a pre-stored expected value of each candidate working thread when the plurality of candidate working threads wait for reading and writing the to-be-processed hash bucket;
and call each candidate working thread according to each pre-stored expected value and the preset memory value to perform authority competition on the hash bucket to be processed, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
Optionally, the calling module is further configured to match each pre-stored expected value with the preset memory value through an improved lock-free authority competition model;
and determine a target working thread according to the matching result, and transfer the bucket operation authority of the hash bucket to be processed to the target working thread, wherein the target working thread is a working thread whose pre-stored expected value is equal to the preset memory value of the hash bucket to be processed.
Optionally, the calling module is further configured to determine, when the preset memory value is successfully matched with a pre-stored expected value, the target working thread corresponding to the successfully matched pre-stored expected value;
update the bit array lock flag bit of the hash bucket to be processed through the improved lock-free authority competition model;
and after the flag bit is updated, transfer the bucket operation authority of the hash bucket to be processed to the target working thread.
Optionally, the driving module is further configured to determine the remaining working threads, among the candidate working threads, other than the target working thread;
suspend the remaining working threads through a preset iteration system;
and when the remaining working threads are called again, return to the step of calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
Optionally, the driving module is further configured to determine a memory space occupied by the remaining working threads, and release the memory space.
Optionally, the query module is further configured to extract a query code included in each data read-write request;
query the unique advertisement identifier corresponding to each query code in the preset lock-free hash table, and determine the hash bucket containing the unique advertisement identifier;
and determining the hash bucket to be processed corresponding to each working thread according to the mapping relation between the query code and the hash bucket.
In addition, in order to achieve the above object, the present invention also proposes a data sharing apparatus including: a memory, a processor and a data sharing program stored on the memory and executable on the processor, the data sharing program configured to implement the steps of the data sharing method as described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon a data sharing program which, when executed by a processor, implements the steps of the data sharing method as described above.
The invention discloses a data sharing method, which comprises the following steps:
when receiving a data read-write request sent by at least one working thread, querying a to-be-processed hash bucket corresponding to each working thread in a preset lock-free hash table according to each data read-write request, wherein the preset lock-free hash table comprises hash values of at least one piece of to-be-shared data in an advertisement engine system; reading a preset memory value of the hash bucket to be processed; calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed; and driving the target working thread to read and write the data in the hash bucket to be processed so as to realize data sharing. After the preset memory value of the hash bucket to be processed and the corresponding working threads are determined, the working threads under the same hash bucket are called to conduct authority competition according to the preset memory value so as to determine the target working thread holding the bucket operation authority of the hash bucket, and finally the target working thread is driven to read and write the data in the hash bucket to be processed so as to realize data sharing, thereby avoiding the technical problems in the prior art of low data throughput and low thread read-write efficiency when an advertisement search engine performs data sharing.
Drawings
FIG. 1 is a schematic diagram of a data sharing device of a hardware runtime environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a data sharing method according to the present invention;
FIG. 3 is a schematic diagram of a hash table architecture according to an embodiment of the data sharing method of the present invention;
FIG. 4 is a flowchart illustrating a data sharing method according to a second embodiment of the present invention;
FIG. 5 is a schematic diagram of a multithreading read-write flow according to an embodiment of the data sharing method of the present invention;
FIG. 6 is a flowchart illustrating a third embodiment of a data sharing method according to the present invention;
FIG. 7 is a schematic diagram of a data writing scenario of an embodiment of a data sharing method according to the present invention;
FIG. 8 is a schematic diagram of a hash capacity-expansion scenario according to an embodiment of the data sharing method of the present invention;
fig. 9 is a block diagram of a first embodiment of a data sharing device according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a data sharing device structure of a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the data sharing apparatus may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004 and a memory 1005. The communication bus 1002 is used to realize connection and communication among these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) or a stable nonvolatile memory (Non-Volatile Memory, NVM), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the structure shown in fig. 1 does not constitute a limitation of the data sharing device and may include more or fewer components than shown, or may combine certain components, or may be arranged in a different arrangement of components.
As shown in fig. 1, the memory 1005, as one type of storage medium, may include an operating system, a network communication module, a user interface module and a data sharing program.
In the data sharing device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the data sharing apparatus of the present invention may be disposed in the data sharing apparatus, and the data sharing apparatus calls the data sharing program stored in the memory 1005 through the processor 1001 and executes the data sharing method provided by the embodiment of the present invention.
An embodiment of the present invention provides a data sharing method, referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the data sharing method of the present invention.
In this embodiment, the data sharing method includes the following steps:
Step S10: when receiving a data read-write request sent by at least one working thread, inquiring a to-be-processed hash bucket corresponding to each working thread in a preset lock-free hash table according to each data read-write request, wherein the preset lock-free hash table comprises a hash value of at least one to-be-shared data in an advertisement engine system.
The execution body of the method of this embodiment may be a device having functions such as data processing, network communication and program running, for example, a cloud server or a virtual machine of the advertisement engine system, or other devices capable of realizing the same or similar functions, which is not particularly limited in this embodiment; in this embodiment and the following embodiments, the cloud server of the advertisement engine system is taken as an example for description.
It can be understood that, when data sharing is needed in the advertisement engine system, the data to be shared is stored in the form of a hash table with the unique advertisement identifier as the key. When a large number of concurrent threads request to read and write the hash table in real time, the hash table can be locked so as to ensure the atomicity and consistency of the data operation of each working thread.
Locking the hash table can prevent multiple threads from accessing and modifying shared resources simultaneously, ensure that the hash table is kept stable in concurrent operation, and prevent data damage and uncertainty results.
The traditional locking schemes for shared data are generally the global mutual exclusion lock and the data slicing mutual exclusion lock. The global mutual exclusion lock causes intense lock competition and severely limits concurrency: only one thread can read and write the hash table at a time, which creates a competition bottleneck and affects system throughput. The data slicing mutual exclusion lock improves on the global mutual exclusion lock by slicing the data of the hash table, supporting N threads concurrently reading and writing the hash table at the same time; however, the locking and unlocking operations still incur overheads such as context switching, and the number of slices remains a constraint on the concurrent reading and writing of the system.
In order to solve the above problem, in this embodiment the hash table is split into a plurality of hash buckets and the granularity of the lock is amortized over the buckets, so that the lock granularity is only 1/n of that of the conventional scheme, where n is the number of buckets, which improves the concurrency of the system to the greatest extent. The offset position of each bucket head is then mapped into a bit array, and the bucket heads are locked in a yield-based CAS mode, so that only one working thread can read and write a given hash bucket at the same time, which reduces memory usage to the greatest extent and improves computing capacity.
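As an illustrative sketch of the bucket-head bit array lock just described (the patent gives no code, so the class and method names, the 64-bit word layout, and the memory orderings are assumptions), each bucket head maps to one bit of an atomic word and is acquired with a yield-based CAS loop:

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

// Per-bucket lock: bucket i owns bit (i % 64) of atomic word (i / 64).
// A worker thread acquires a bucket by flipping its bit from 0 to 1 with a
// CAS; a thread that loses the competition yields instead of spinning hot.
class BitArrayBucketLock {
public:
    explicit BitArrayBucketLock(std::size_t buckets)
        : words_((buckets + 63) / 64) {}

    void lock(std::size_t bucket) {
        std::atomic<std::uint64_t>& word = words_[bucket / 64];
        const std::uint64_t mask = std::uint64_t{1} << (bucket % 64);
        std::uint64_t expected = word.load(std::memory_order_relaxed);
        for (;;) {
            expected &= ~mask;  // only succeed while the bucket's bit is 0
            if (word.compare_exchange_weak(expected, expected | mask,
                                           std::memory_order_acquire,
                                           std::memory_order_relaxed))
                return;  // this thread won the bucket operation authority
            std::this_thread::yield();  // loser gives up its CPU slice
        }
    }

    bool try_lock(std::size_t bucket) {
        std::atomic<std::uint64_t>& word = words_[bucket / 64];
        const std::uint64_t mask = std::uint64_t{1} << (bucket % 64);
        std::uint64_t expected = word.load(std::memory_order_relaxed) & ~mask;
        return word.compare_exchange_strong(expected, expected | mask,
                                            std::memory_order_acquire,
                                            std::memory_order_relaxed);
    }

    void unlock(std::size_t bucket) {
        words_[bucket / 64].fetch_and(
            ~(std::uint64_t{1} << (bucket % 64)), std::memory_order_release);
    }

private:
    std::vector<std::atomic<std::uint64_t>> words_;
};
```

Packing 64 bucket flags into one atomic word keeps the memory cost of the locks at one bit per bucket, in line with the memory-reduction goal stated above.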
It should be noted that, referring to fig. 3, the preset lock-free hash table is formed by a plurality of hash buckets in linked list form. In this embodiment, the same hash value may refer to a plurality of elements or data in the hash table, and the head of each hash bucket is controlled by a bit array lock to ensure that a hash bucket can only be read and written by one working thread at the same time, so as to prevent multiple threads from accessing and modifying shared resources simultaneously.
Compared with the traditional global exclusive lock or data slicing exclusive lock technology, the bit array lock provided in this embodiment does not need to slice the data in the hash table and is not influenced by the number of slices, so it can support more threads accessing the same hash table at the same time, while still ensuring that the data in one hash bucket is accessed by only one working thread.
Further, the querying the to-be-processed hash bucket corresponding to each working thread in the preset lock-free hash table according to each data read-write request includes:
extracting the query code contained in each data read-write request;
querying the unique advertisement identifier corresponding to each query code in the preset lock-free hash table, and determining the hash bucket containing the unique advertisement identifier;
and determining the hash bucket to be processed corresponding to each working thread according to the mapping relation between the query code and the hash bucket.
In a specific implementation, when an element or data in the hash table is queried, the mapping relation of the query code in the hash table is used to determine the hash bucket corresponding to the query code and the data stored in that hash bucket, wherein the query code may be the unique advertisement identifier, a hash value, or other identifier information capable of referring to the advertisement data. For example, in fig. 3, if the given query code is 3, the working thread locates the data stored in the hash bucket corresponding to the third linked list, that is, data3, and performs the read-write operation on data3, so as to realize data sharing of the advertisement engine system.
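The lookup path just described can be sketched as follows; the container layout and all names are hypothetical, with the unique advertisement identifier serving as the query code that is mapped to a bucket and matched against the keys stored in that bucket's linked list:

```cpp
#include <cstddef>
#include <forward_list>
#include <string>
#include <utility>
#include <vector>

// Minimal chained hash table: an array of linked lists of (ad_id, data).
struct AdHashTable {
    explicit AdHashTable(std::size_t buckets) : table(buckets) {}

    // Mapping relation between the query code and its hash bucket.
    std::size_t bucket_of(std::size_t ad_id) const {
        return ad_id % table.size();
    }

    void insert(std::size_t ad_id, std::string data) {
        table[bucket_of(ad_id)].emplace_front(ad_id, std::move(data));
    }

    // Walk the bucket's linked list for the unique advertisement identifier.
    const std::string* find(std::size_t ad_id) const {
        for (const auto& [key, value] : table[bucket_of(ad_id)])
            if (key == ad_id) return &value;
        return nullptr;  // no bucket contains this identifier
    }

    std::vector<std::forward_list<std::pair<std::size_t, std::string>>> table;
};
```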
Step S20: and reading a preset memory value of the hash bucket to be processed.
It can be understood that, in order to ensure that only one working thread reads and writes the hash bucket data, in this embodiment a memory value is preset for the hash bucket, so that when a plurality of working threads send read-write requests simultaneously, the memory value can be compared with the expected value held by each working thread to perform authority competition among the working threads, ensuring that only one working thread can perform the read-write operation at the same time.
Step S30: and calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
It should be noted that, invoking each working thread to perform authority competition on the hash bucket to be processed means that the improved lock-free authority competition model matches the expected value corresponding to each working thread with the preset memory value, if the expected value of a certain working thread is equal to the preset memory value, it is determined that the working thread has the bucket operation authority of the hash bucket to be processed, and the corresponding data processing operation can be performed on the hash bucket to be processed according to the data operation scheme recorded by the working thread.
In addition, in this embodiment, if only one working thread needs to read and write data in the hash table, the corresponding hash bucket in the hash table may be queried according to the key in the data read and write request sent by the working thread, so as to directly read the stored data in the hash bucket, thereby realizing data sharing.
Step S40: and driving the target working thread to read and write the data in the hash bucket to be processed so as to realize data sharing.
It should be understood that after the target working thread corresponding to the bucket operation authority of the hash bucket to be processed is determined, the target working thread is driven to read or rewrite the data in the hash bucket to be processed, so that the working thread can feed back the data to the home client of the working thread, and data sharing between the advertisement engine system and the client is realized.
The other working threads without the bucket operation authority can be temporarily suspended to wait for the next calling instruction. In this process, the CPU and running memory occupied by these working threads can be released, so as to improve the use efficiency of the CPU and memory: even when there are a large number of clients and a large number of data sharing requests from working threads, the working threads without the bucket operation authority do not occupy CPU and running memory, which in turn improves the data read-write efficiency of the target working thread holding the bucket operation authority.
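The suspend-and-retry behaviour described above might be modelled as a yield-based retry loop; this is a simplified single-flag sketch (all names assumed), not the patent's bit-array implementation:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// A losing thread yields its CPU slice instead of busy-spinning, then
// returns to the competition step until it eventually wins the authority.
void with_bucket_authority(std::atomic<bool>& busy, int& shared_data) {
    bool expected = false;
    while (!busy.compare_exchange_weak(expected, true)) {
        expected = false;           // reset for the next competition round
        std::this_thread::yield();  // suspend: give the CPU to other threads
    }
    ++shared_data;    // exclusive read-write on the bucket's data
    busy.store(false);  // transfer the authority back
}

// Every worker eventually gets its turn, so all writes are applied exactly once.
int run_workers(int n) {
    std::atomic<bool> busy{false};
    int shared_data = 0;  // protected by `busy`, so no data race
    std::vector<std::thread> pool;
    for (int i = 0; i < n; ++i)
        pool.emplace_back([&] { with_bucket_authority(busy, shared_data); });
    for (auto& t : pool) t.join();
    return shared_data;
}
```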
According to this embodiment, when data read-write requests sent by a plurality of working threads are received, the to-be-processed hash bucket corresponding to each working thread is queried in the preset lock-free hash table through the received data read-write requests, and the preset memory value of the to-be-processed hash bucket is read. After the preset memory value of the to-be-processed hash bucket and the corresponding working threads are determined, the working threads under the same hash bucket are called to conduct authority competition on the hash bucket according to the preset memory value so as to determine the target working thread holding the bucket operation authority of the hash bucket, and finally the target working thread is driven to read and write the data in the to-be-processed hash bucket so as to realize data sharing. This guarantees the atomicity of the data to be shared, and avoids the technical problems in the prior art of low data throughput and low thread read-write efficiency when an advertisement search engine performs data sharing.
Referring to fig. 4, fig. 4 is a flowchart illustrating a data sharing method according to a second embodiment of the present invention.
Based on the first embodiment, in this embodiment, the step S30 includes:
step S301: and when a plurality of candidate working threads wait for reading and writing the hash bucket to be processed, determining the pre-stored expected value of each candidate working thread.
It should be noted that, when a plurality of candidate working threads wait to read and write one to-be-processed hash bucket at the same time, in order to ensure that only one working thread reads and writes the to-be-processed hash bucket at a time, and to ensure the atomicity of data processing and the consistency of data, a bit array lock is set for each hash bucket, and authority competition is performed among the waiting working threads through an improved lock-free authority competition model so as to determine the working thread that finally holds the bucket operation authority, wherein the improved lock-free authority competition model may be an improved authority competition model based on the compare-and-swap (Compare And Swap, CAS) algorithm.
By contrast, in the traditional authority competition model, when a plurality of threads try to read and write one piece of data, only one thread can run, and the other threads that cannot obtain the read-write authority are not suspended but keep waiting to be called again; in this process, each thread occupies a certain amount of CPU and memory, and when the number of read-write requests is too large, the resulting stutter affects the data throughput.
In a specific implementation, each working thread correspondingly stores a pre-stored expected value, and the pre-stored expected values of the working threads may be the same. When the pre-stored expected value of each working thread is matched with the preset memory value of the hash bucket to be processed, the matching may be performed sequentially according to the serial number, time sequence or alphabetical order of each working thread in the server, and once the pre-stored expected value of one working thread is found to be the same as the preset memory value of the hash bucket to be processed, the subsequent authority competition is stopped.
Step S302: and calling each working thread to be selected according to each pre-stored expected value and the preset memory value to perform authority competition on the hash bucket to be processed, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
It will be appreciated that the bucket operation authority of the hash bucket to be processed can only be owned by one working thread at a time.
Further, the step of calling each candidate working thread to perform authority competition on the hash bucket to be processed according to each pre-stored expected value and the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed includes:
Matching each pre-stored expected value with the preset memory value through an improved lock-free authority competition model;
And determining a target working thread according to the matching result, and transferring the bucket operation authority of the hash bucket to be processed to the target working thread, wherein the target working thread is a working thread with a prestored expected value equal to a preset memory value of the hash bucket to be processed.
In a specific implementation, taking the preset memory value as V and the pre-stored expected values of the working threads as a= { A1, A2, A3, A4} as an example, the pre-stored expected values a= { A1, A2, A3, A4} are matched against the preset memory value V of the hash bucket to be processed through the improved lock-free authority competition model. If the value A1 is the same as the value V, the authority competition of the working thread corresponding to A1 succeeds, and that thread holds the bucket operation authority of the hash bucket to be processed; at this time, the preset memory value can be read and written by the working thread corresponding to A1, while the working threads corresponding to A2, A3 and A4 can be temporarily suspended, and the cpu and memory occupied by those threads released, thereby reducing the load of the server.
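The competition just described can be sketched conceptually as follows. This is a minimal Python sketch, not the patent's implementation: the names PendingHashBucket and compete_for_bucket are hypothetical, and an internal threading.Lock merely emulates the hardware-level atomic compare-and-swap that a real lock-free model would rely on.

```python
import threading

class PendingHashBucket:
    """Conceptual model of a to-be-processed hash bucket. The internal
    threading.Lock only emulates the atomicity of a CAS instruction."""

    UNLOCKED = 0  # preset memory value V: no thread holds bucket authority
    LOCKED = 1    # a working thread currently holds the bucket authority

    def __init__(self):
        self._value = self.UNLOCKED
        self._guard = threading.Lock()  # stands in for hardware CAS

    def compare_and_swap(self, expected, new):
        """Atomically: if the memory value equals `expected`,
        set it to `new` and report success."""
        with self._guard:
            if self._value == expected:
                self._value = new
                return True
            return False

def compete_for_bucket(bucket, winners, thread_id):
    # Every candidate working thread holds the same pre-stored expected
    # value (UNLOCKED); only the first successful CAS wins the bucket
    # operation authority, mirroring "A1 equals V" in the example above.
    if bucket.compare_and_swap(PendingHashBucket.UNLOCKED,
                               PendingHashBucket.LOCKED):
        winners.append(thread_id)
```

Running several competing threads against one bucket leaves exactly one winner, regardless of scheduling order, which is the property the authority competition relies on.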
Further, the determining a target working thread according to the matching result, and transferring the bucket operation authority of the hash bucket to be processed to the target working thread, includes:
When the preset memory value is successfully matched with the pre-stored expected value, determining a target working thread corresponding to the pre-stored expected value which is successfully matched;
updating the flag bit of the bit array lock of the hash bucket to be processed through the improved lock-free authority competition model;
And after the flag bit is updated, transferring the bucket operation authority of the hash bucket to be processed to the target working thread.
It can be understood that, referring to fig. 5, fig. 5 is a schematic diagram of a scenario in which multiple threads request to read and write a hash table at the same time in this embodiment. After the attribution of the bucket operation authority of a hash bucket to be processed is determined, the flag bit of the bit array lock of that hash bucket may be modified, so as to avoid authority competition being performed again and to ensure that only one working thread can read and write the data in the hash bucket; at the same time, after the flag bit is updated, the bucket operation authority of the hash bucket to be processed is transferred to the target working thread, so that the target working thread can read and write the data in the hash bucket.
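One way to picture the per-bucket bit array lock is sketched below. BitArrayLock and its method names are illustrative, and a plain Python integer guarded by a lock emulates what would be a single atomically-updated word in a real lock-free table.

```python
import threading

class BitArrayLock:
    """Illustrative sketch: bit i of one shared word is the flag bit of
    hash bucket i. A real lock-free table would update the word with an
    atomic fetch-or; the threading.Lock here only emulates that."""

    def __init__(self):
        self._bits = 0
        self._guard = threading.Lock()

    def try_acquire(self, bucket_index):
        mask = 1 << bucket_index
        with self._guard:
            if self._bits & mask:   # flag already set: authority taken
                return False
            self._bits |= mask      # update the flag bit of this bucket
            return True

    def release(self, bucket_index):
        with self._guard:
            self._bits &= ~(1 << bucket_index)  # clear the flag bit
```

Because each bucket owns one bit, acquiring bucket 3 does not block competitors for bucket 5, which is what lets independent buckets be read and written concurrently.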
In addition, in this embodiment, after the step S30, the method further includes:
step S310: and determining the rest working threads except the target working thread in the candidate working threads.
Step S320: and suspending the remaining working threads through a preset iteration system.
Step S330: and returning to the step of calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value when the rest working threads are called again, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
It should be understood that the preset iteration system refers to a yield system, which is mainly used, after the target working thread successfully reads the data, to call again the remaining working threads that failed the previous authority competition and let them compete for the hash bucket again, so that every working thread can eventually read the data, and each client achieves sharing on the premise that the atomicity and consistency of single-thread data reads are guaranteed.
It should be noted that suspending the remaining working threads refers to determining the memory space occupied by the remaining working threads and freeing that space, where the occupied resources include, but are not limited to, performance parameters such as cpu, running memory, and video memory.
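The retry behaviour of the losing threads under the yield system can be sketched as follows. This is a conceptual Python sketch under assumed names (CasCell, write_record), with time.sleep(0) standing in for the scheduler yield call and a lock emulating the atomic CAS.

```python
import threading
import time

class CasCell:
    """Minimal CAS cell; the lock only emulates hardware atomicity."""

    def __init__(self):
        self.value = 0
        self._guard = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self.value == expected:
                self.value = new
                return True
            return False

def write_record(bucket, records, item):
    # A losing thread is not blocked by the OS; it yields its cpu slice
    # (the preset iteration / yield system) and competes again later.
    while not bucket.compare_and_swap(0, 1):
        time.sleep(0)                  # yield, then retry the competition
    try:
        records.append(item)           # exclusive read-write section
    finally:
        bucket.compare_and_swap(1, 0)  # return the bucket authority
```

Every thread eventually wins one round of competition, so all writes complete while at most one thread touches the bucket at a time.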
In this embodiment, when a plurality of candidate working threads wait to read and write the hash bucket to be processed, the pre-stored expected value of each candidate working thread is determined, and each candidate working thread is called to perform authority competition on the hash bucket to be processed according to the pre-stored expected values and the preset memory value, so as to obtain the target working thread corresponding to the bucket operation authority of the hash bucket to be processed. Specifically, the target working thread that finally holds the bucket operation authority is determined by matching the expected value in each working thread against the memory value of the hash bucket, and the working threads that fail the authority competition are suspended, which improves data throughput and guarantees the atomicity of data reading and writing.
Referring to fig. 6, fig. 6 is a flowchart illustrating a third embodiment of a data sharing method according to the present invention.
Based on the above second embodiment, in this embodiment, after step S40, the method further includes:
Step S50: when a data writing request is received, obtaining data to be written corresponding to the data writing request.
Step S60: and calculating a hash value to be written corresponding to the data to be written through a preset hash model.
Step S70: and determining a hash bucket to be written according to the hash value to be written.
Step S80: and acquiring the linked list address of the hash bucket to be written.
Step S90: and inserting the data to be written into the hash bucket to be written according to the linked list address so as to complete data writing.
It should be noted that the preset hash model is used to calculate the hash value of the data to be written, for example through hash calculation algorithms such as MD5, SHA-1, SHA-256 and SHA-512, which are not particularly limited in this embodiment.
In a specific implementation, referring to fig. 3, fig. 3 is a schematic diagram of a hash table in this embodiment. When determining the hash bucket to be written, a hash algorithm may be used to calculate the corresponding hash value to be written for the data to be written; by matching that hash value with the keys in the hash table, the corresponding hash bucket to be written and its linked list address are determined, and the data to be written is then inserted into the hash bucket to be written, so that data writing is completed.
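Steps S60 to S70 amount to mapping the data to be written onto a bucket index. The sketch below assumes SHA-256 as the preset hash model (any of the algorithms listed above would work the same way); the helper name bucket_index is hypothetical.

```python
import hashlib

def bucket_index(data_to_write: str, bucket_count: int) -> int:
    """Compute the to-be-written hash value with a preset hash model
    (SHA-256 here) and reduce it to a hash bucket index."""
    digest = hashlib.sha256(data_to_write.encode("utf-8")).hexdigest()
    return int(digest, 16) % bucket_count
```

The mapping is deterministic, so the same data always lands in the same bucket, which is what allows the later read path to find it again.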
Further, before the data to be written is inserted into the hash bucket to be written according to the linked list address, the method further includes:
acquiring a current load factor of the hash bucket to be written;
Estimating a target load factor after data insertion according to the current load factor;
and when the target load factor is smaller than a preset threshold value, inserting the data to be written into the hash bucket to be written according to the linked list address.
It should be understood that, because the hash linked list has a limited data storage capacity, in order to avoid the hash linked list length after data writing exceeding the set length, the target load factor after data insertion may be estimated based on the current load factor to determine whether the target load factor is smaller than a preset threshold, where the preset threshold is generally the product of the maximum hash table length and a preset coefficient, and the preset coefficient may be 0.75, or another constant smaller than 1, so as to leave a certain redundant space for the hash table.
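The threshold test can be sketched as below. The helper name needs_expansion is hypothetical; the threshold follows the text (maximum length multiplied by a preset coefficient, 0.75 by default).

```python
def needs_expansion(item_count: int, max_length: int,
                    coefficient: float = 0.75) -> bool:
    """Estimate the target load after inserting one more item and
    compare it with the preset threshold (max_length * coefficient)."""
    target = item_count + 1               # estimated count after insertion
    threshold = max_length * coefficient  # preset threshold from the text
    return target >= threshold
```

With a maximum length of 16 and coefficient 0.75, the threshold is 12, so insertion proceeds until the estimated count would reach 12.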
Further, the linked list address includes: a bucket head linked list address and a bucket tail linked list address;
the inserting the data to be written into the hash bucket to be written according to the linked list address comprises the following steps:
inserting the data to be written into the barrel head of the hash barrel to be written through a barrel head insertion strategy according to the barrel head linked list address; or (b)
And inserting the data to be written into the tail of the hash bucket to be written through a tail inserting strategy according to the tail linked list address.
It can be understood that, when data insertion is performed, referring to fig. 7, fig. 7 is a schematic diagram of two data insertion positions, after determining a linked list array stored in a hash bucket, data to be inserted may be inserted according to a bucket head linked list address or a bucket tail linked list address, where the bucket head linked list address is also referred to as a hash value of a bucket head node, and the bucket tail linked list address is also referred to as a hash value of a bucket tail node.
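The two insertion strategies can be sketched on a toy singly linked bucket. Node and LinkedBucket are illustrative names; a real bucket would also carry the stored hash values alongside each node.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class LinkedBucket:
    """Toy singly linked hash bucket with head and tail pointers."""

    def __init__(self):
        self.head = None  # bucket head node (bucket head linked list address)
        self.tail = None  # bucket tail node (bucket tail linked list address)

    def insert_head(self, value):
        # Bucket-head insertion strategy: the new node becomes the head.
        node = Node(value, self.head)
        self.head = node
        if self.tail is None:
            self.tail = node

    def insert_tail(self, value):
        # Bucket-tail insertion strategy: the new node is appended after the tail.
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def values(self):
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out
```

Head insertion is constant-time without traversing the list, while tail insertion preserves arrival order; keeping both addresses makes either strategy O(1).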
Further, after the estimating a target load factor after data insertion according to the current load factor, the method further includes:
When the target load factor is greater than or equal to a preset threshold value, reading an array mapping relation and stored data of a preset lock-free hash table;
and inserting the storage data into a capacity-expansion lock-free hash table based on a preset head insertion strategy and the array mapping relation, wherein the linked list length of the capacity-expansion lock-free hash table is twice that of the preset lock-free hash table.
In a specific implementation, when the target load factor is greater than or equal to the preset threshold, the expansion of the linked list array in the hash bucket may be performed by creating a new node array (the capacity-expansion lock-free hash table) whose length is twice the length of the linked list array in the original hash bucket; the specific expansion mode may be to re-insert the whole of the original linked list array into the new hash table with re-computed values. Referring to fig. 8, fig. 8 is a length schematic diagram of the capacity-expansion lock-free hash table in this embodiment.
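The expansion step can be sketched as a rehash into an array twice as long. expand_table and the byte-sum toy hash below are hypothetical stand-ins for the engine's real mapping; the head-insert strategy matches the preset head insertion strategy mentioned above.

```python
def toy_hash(key: str) -> int:
    # Deterministic toy hash for illustration; a real table would
    # reuse its preset hash model here.
    return sum(key.encode("utf-8"))

def expand_table(table):
    """Create the capacity-expansion table (twice the linked-list array
    length) and re-insert every stored key with a head-insert strategy."""
    new_length = len(table) * 2
    new_table = [[] for _ in range(new_length)]
    for bucket in table:
        for key in bucket:
            # Re-map each key against the new length; insert at the head.
            new_table[toy_hash(key) % new_length].insert(0, key)
    return new_table
```

Note that the modulus changes with the new length, so every stored key must be re-mapped; simply copying buckets across would leave keys unreachable under the new mapping.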
According to the embodiment, when data writing is needed, the hash value of the data to be written is calculated, then the corresponding hash bucket to be written is queried, the corresponding insertion mode is selected according to the bucket head linked list address or the bucket tail linked list address of the hash bucket to be written, further data writing is achieved, and when the capacity of the hash bucket after data writing is insufficient, capacity expansion can be carried out, and the data writing requirement of a user is met.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium stores a data sharing program, and the data sharing program realizes the steps of the data sharing method when being executed by a processor.
Because the storage medium adopts all the technical schemes of all the embodiments, the storage medium has at least all the beneficial effects brought by the technical schemes of the embodiments, and the description is omitted here.
Referring to fig. 5, fig. 5 is a block diagram illustrating a first embodiment of a data sharing apparatus according to the present invention.
As shown in fig. 5, the data sharing device provided in the embodiment of the present invention includes:
The query module 10 is configured to query, when receiving a data read-write request sent by at least one working thread, a hash bucket to be processed corresponding to each working thread in a preset lockless hash table according to each data read-write request, where the preset lockless hash table includes a hash value of at least one data to be shared in the advertisement engine system.
And the reading module 20 is used for reading the preset memory value of the hash bucket to be processed.
And the calling module 30 is used for calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
And the driving module 40 is used for driving the target working thread to read and write the data in the hash bucket to be processed so as to realize data sharing.
In an embodiment, the calling module 30 is further configured to determine a pre-stored expected value of each candidate working thread when the plurality of candidate working threads wait to read and write the pending hash bucket; and calling each working thread to be selected according to each pre-stored expected value and the preset memory value to perform authority competition on the hash bucket to be processed, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
In an embodiment, the calling module 30 is further configured to match each pre-stored expected value with the preset memory value through an improved lock-free authority competition model; and determining a target working thread according to the matching result, and transferring the bucket operation authority of the hash bucket to be processed to the target working thread, wherein the target working thread is a working thread with a prestored expected value equal to a preset memory value of the hash bucket to be processed.
In an embodiment, the invoking module 30 is further configured to determine, when the preset memory value matches the pre-stored expected value successfully, a target working thread corresponding to the pre-stored expected value that matches successfully; updating the flag bit of the bit array lock of the hash bucket to be processed through the improved lock-free authority competition model; and after the flag bit is updated, transferring the bucket operation authority of the hash bucket to be processed to the target working thread.
In an embodiment, the calling module 30 is further configured to determine remaining worker threads of each candidate worker thread except the target worker thread; suspending the residual working threads through a preset iteration system; and returning to the step of calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value when the rest working threads are called again, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
In an embodiment, the calling module 30 is further configured to determine a memory space occupied by the remaining worker threads, and release the memory space.
In an embodiment, the query module 10 is further configured to extract a query code included in each data read-write request; inquiring unique advertisement identifiers corresponding to inquiry codes in a preset lock-free hash table, and determining a hash bucket containing the unique advertisement identifiers; and determining the hash bucket to be processed corresponding to each working thread according to the mapping relation between the query code and the hash bucket.
In an embodiment, the driving module 40 is further configured to, when receiving a data writing request, obtain data to be written corresponding to the data writing request; calculating a hash value to be written corresponding to the data to be written through a preset hash model; determining a hash bucket to be written according to the hash value to be written; acquiring a linked list address of the hash bucket to be written; and inserting the data to be written into the hash bucket to be written according to the linked list address so as to complete data writing.
In an embodiment, the driving module 40 is further configured to obtain a current load factor of the hash bucket to be written; estimating a target load factor after data insertion according to the current load factor; and when the target load factor is smaller than a preset threshold value, inserting the data to be written into the hash bucket to be written according to the linked list address.
In an embodiment, the driving module 40 is further configured to insert the data to be written into the bucket head of the hash bucket through a bucket head insertion policy according to the bucket head linked list address; or inserting the data to be written into the tail of the hash bucket to be written through a tail inserting strategy according to the tail linked list address.
In an embodiment, the driving module 40 is further configured to read an array mapping relationship and stored data of a preset lockless hash table when the target load factor is greater than or equal to a preset threshold; and inserting the storage data into a capacity-expansion lock-free hash table based on a preset head insertion strategy and the array mapping relation, wherein the chain table length of the capacity-expansion lock-free hash table is twice that of the preset lock-free hash table.
According to this embodiment, when data read-write requests sent by a plurality of working threads are received, the hash bucket to be processed corresponding to each working thread is queried in the preset lock-free hash table according to the received requests, and the preset memory value of that hash bucket is read. After the preset memory value of the hash bucket to be processed and the corresponding working threads are determined, the working threads under the same hash bucket are called to perform authority competition on the bucket according to the preset memory value, so as to determine the target working thread holding the bucket operation authority of the hash bucket. Finally, the target working thread is driven to read and write the data in the hash bucket to be processed, so as to realize data sharing, guarantee the atomicity of the data to be shared, and avoid the technical problem in the prior art that the data throughput and the efficiency of thread read-write are low when an advertisement search engine performs data sharing.
It should be understood that the foregoing is illustrative only and is not limiting, and that in specific applications, those skilled in the art may set the invention as desired, and the invention is not limited thereto.
It should be noted that the above-described working procedure is merely illustrative, and does not limit the scope of the present invention, and in practical application, a person skilled in the art may select part or all of them according to actual needs to achieve the purpose of the embodiment, which is not limited herein.
In addition, technical details not described in detail in this embodiment may refer to the data sharing method provided in any embodiment of the present invention, and are not described herein again.
Furthermore, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of embodiments, it will be clear to a person skilled in the art that the above embodiment method may be implemented by means of software plus a necessary general hardware platform, but may of course also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.
The invention discloses A1, a data sharing method, which comprises the following steps:
when receiving a data read-write request sent by at least one working thread, inquiring a to-be-processed hash bucket corresponding to each working thread in a preset lock-free hash table according to each data read-write request, wherein the preset lock-free hash table comprises hash values of at least one to-be-shared data in an advertisement engine system;
reading a preset memory value of the hash bucket to be processed;
Calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed;
and driving the target working thread to read and write the data in the hash bucket to be processed so as to realize data sharing.
A2, the data sharing method as described in A1, wherein invoking each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value, to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed, includes:
when a plurality of candidate working threads wait to read and write the hash bucket to be processed, determining a pre-stored expected value of each candidate working thread;
And calling each working thread to be selected according to each pre-stored expected value and the preset memory value to perform authority competition on the hash bucket to be processed, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
A3, invoking each candidate working thread to perform authority competition on the hash bucket to be processed according to each pre-stored expected value and the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed, wherein the method comprises the following steps:
Matching each pre-stored expected value with the preset memory value through an improved lock-free authority competition model;
And determining a target working thread according to the matching result, and transferring the bucket operation authority of the hash bucket to be processed to the target working thread, wherein the target working thread is a working thread with a prestored expected value equal to a preset memory value of the hash bucket to be processed.
A4, determining a target working thread according to a matching result, and transferring the bucket operation authority of the hash bucket to be processed to the target working thread, wherein the data sharing method as described in A3 comprises the following steps:
When the preset memory value is successfully matched with the pre-stored expected value, determining a target working thread corresponding to the pre-stored expected value which is successfully matched;
updating the flag bit of the bit array lock of the hash bucket to be processed through the improved lock-free authority competition model;
And after the flag bit is updated, transferring the bucket operation authority of the hash bucket to be processed to the target working thread.
A5, the data sharing method according to A1, wherein after calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain the target working thread corresponding to the bucket operation authority of the hash bucket to be processed, further comprises:
Determining the rest working threads except the target working thread in all the working threads to be selected;
Suspending the remaining working threads through a preset iteration system;
and returning to the step of calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value when the rest working threads are called again, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
A6, the data sharing method according to A5, wherein the suspending the remaining working threads through a preset iteration system further comprises:
and determining the memory space occupied by the residual working threads, and releasing the memory space.
A7, the data sharing method according to any one of A1-A6, wherein the querying the to-be-processed hash bucket corresponding to each working thread in the preset lock-free hash table according to each data read-write request includes:
extracting the query code contained in each data read-write request;
inquiring unique advertisement identifiers corresponding to inquiry codes in a preset lock-free hash table, and determining a hash bucket containing the unique advertisement identifiers;
and determining the hash bucket to be processed corresponding to each working thread according to the mapping relation between the query code and the hash bucket.
A8. the data sharing method according to any one of A1 to A6, the data sharing method further comprising:
When a data writing request is received, acquiring data to be written corresponding to the data writing request;
calculating a hash value to be written corresponding to the data to be written through a preset hash model;
Determining a hash bucket to be written according to the hash value to be written;
acquiring a linked list address of the hash bucket to be written;
and inserting the data to be written into the hash bucket to be written according to the linked list address so as to complete data writing.
A9, the data sharing method according to A8, before the data to be written is inserted into the hash bucket to be written according to the linked list address, further includes:
acquiring a current load factor of the hash bucket to be written;
Estimating a target load factor after data insertion according to the current load factor;
and when the target load factor is smaller than a preset threshold value, inserting the data to be written into the hash bucket to be written according to the linked list address.
A10, the data sharing method as described in A9, wherein the linked list address comprises: a bucket head linked list address and a bucket tail linked list address;
the inserting the data to be written into the hash bucket to be written according to the linked list address comprises the following steps:
inserting the data to be written into the barrel head of the hash barrel to be written through a barrel head insertion strategy according to the barrel head linked list address; or (b)
And inserting the data to be written into the tail of the hash bucket to be written through a tail inserting strategy according to the tail linked list address.
A11, the data sharing method according to A9, after estimating the target load factor after the data insertion according to the current load factor, further includes:
When the target load factor is greater than or equal to a preset threshold value, reading an array mapping relation and stored data of a preset lock-free hash table;
and inserting the storage data into a capacity-expansion lock-free hash table based on a preset head insertion strategy and the array mapping relation, wherein the linked list length of the capacity-expansion lock-free hash table is twice that of the preset lock-free hash table.
The invention also discloses a B12 and a data sharing device, wherein the data sharing device comprises:
the query module is used for querying a hash bucket to be processed corresponding to each working thread in a preset lock-free hash table according to each data read-write request when receiving a data read-write request sent by at least one working thread, wherein the preset lock-free hash table comprises a hash value of at least one data to be shared in an advertisement engine system;
The reading module is used for reading the preset memory value of the hash bucket to be processed;
the calling module is used for calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed;
And the driving module is used for driving the target working thread to read and write the data in the hash bucket to be processed so as to realize data sharing.
B13, the data sharing device as described in B12, wherein the calling module is further configured to determine a pre-stored expected value of each candidate working thread when the plurality of candidate working threads wait to read and write the pending hash bucket;
And calling each working thread to be selected according to each pre-stored expected value and the preset memory value to perform authority competition on the hash bucket to be processed, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
The data sharing device as described in B14, wherein the calling module is further configured to match each pre-stored expected value with the preset memory value through an improved lock-free authority competition model;
And determining a target working thread according to the matching result, and transferring the bucket operation authority of the hash bucket to be processed to the target working thread, wherein the target working thread is a working thread with a prestored expected value equal to a preset memory value of the hash bucket to be processed.
B15, the data sharing device as described in B14, wherein the calling module is further configured to determine a target working thread corresponding to a pre-stored expected value that is successfully matched when the pre-set memory value is successfully matched with the pre-stored expected value;
updating the flag bit of the bit array lock of the hash bucket to be processed through the improved lock-free authority competition model;
And after the flag bit is updated, transferring the bucket operation authority of the hash bucket to be processed to the target working thread.
B16, the data sharing device as described in B12, the driving module further configured to determine remaining worker threads of each candidate worker thread except the target worker thread;
suspending the remaining working threads through a preset iteration mechanism;
and returning to the step of calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value when the rest working threads are called again, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
B17, the data sharing device as described in B16, wherein the driving module is further configured to determine the memory space occupied by the remaining working threads, and release the memory space.
B18, the data sharing device as described in B12, wherein the query module is further configured to extract a query code included in each data read-write request;
querying the unique advertisement identifier corresponding to each query code in the preset lock-free hash table, and determining the hash bucket containing the unique advertisement identifier;
and determining the hash bucket to be processed corresponding to each working thread according to the mapping relation between the query code and the hash bucket.
The invention also discloses C19, a data sharing device, comprising: a memory, a processor, and a data sharing program stored on the memory and executable on the processor, the data sharing program being configured to implement the data sharing method as described above.
The invention also discloses D20, a storage medium storing a data sharing program, the data sharing program, when executed by a processor, implementing the data sharing method as described above.

Claims (10)

1. A data sharing method, characterized in that the data sharing method comprises:
when receiving a data read-write request sent by at least one working thread, querying, in a preset lock-free hash table, the hash bucket to be processed corresponding to each working thread according to each data read-write request, wherein the preset lock-free hash table comprises a hash value of at least one piece of data to be shared in an advertisement engine system;
reading a preset memory value of the hash bucket to be processed;
Calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed;
and driving the target working thread to read and write the data in the hash bucket to be processed so as to realize data sharing.
2. The data sharing method as claimed in claim 1, wherein the calling each worker thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target worker thread corresponding to a bucket operation authority of the hash bucket to be processed includes:
when a plurality of candidate working threads wait to read and write the hash bucket to be processed, determining a pre-stored expected value of each candidate working thread;
And calling each working thread to be selected according to each pre-stored expected value and the preset memory value to perform authority competition on the hash bucket to be processed, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
3. The data sharing method as claimed in claim 2, wherein the calling each candidate working thread to perform authority competition on the hash bucket to be processed according to each pre-stored expected value and the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed includes:
Matching each pre-stored expected value with the preset memory value through an improved lock-free authority competition model;
And determining a target working thread according to the matching result, and transferring the bucket operation authority of the hash bucket to be processed to the target working thread, wherein the target working thread is a working thread with a prestored expected value equal to a preset memory value of the hash bucket to be processed.
4. The data sharing method as claimed in claim 3, wherein the determining a target worker thread according to the matching result and transferring the bucket operation authority of the hash bucket to be processed to the target worker thread comprises:
When the preset memory value is successfully matched with the pre-stored expected value, determining a target working thread corresponding to the pre-stored expected value which is successfully matched;
updating the bit-array lock flag bit of the hash bucket to be processed through the improved lock-free authority competition model;
And after the flag bit is updated, transferring the bucket operation authority of the hash bucket to be processed to the target working thread.
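The permission competition of claims 2 to 4 reads like a compare-and-swap (CAS) protocol: a working thread wins the bucket only when its pre-stored expected value equals the bucket's current memory value, and the winner atomically updates the lock flag bit. A minimal C++ sketch under that reading (the names `Bucket`, `try_acquire`, and the encoding of the free state as `0` are illustrative assumptions, not taken from the patent):

```cpp
#include <atomic>

// Illustrative sketch of claims 2-4: each hash bucket carries an atomic
// word acting as its lock flag. A thread acquires the bucket's operation
// authority only if the word still holds the expected "free" value.
struct Bucket {
    std::atomic<int> flag{0};   // 0 = free (the preset memory value)
};

// Returns true when this thread wins the competition: the CAS succeeds
// only if flag == expected_free, atomically writing owner_id on success.
bool try_acquire(Bucket& b, int expected_free, int owner_id) {
    int expected = expected_free;          // the pre-stored expected value
    return b.flag.compare_exchange_strong(expected, owner_id,
                                          std::memory_order_acq_rel,
                                          std::memory_order_acquire);
}

// Restores the free value so a later competitor's CAS can succeed.
void release(Bucket& b, int expected_free) {
    b.flag.store(expected_free, std::memory_order_release);
}
```

Only one of several concurrent callers can observe the free value and flip it, which is what gives each bucket a single owner without a mutex.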
5. The data sharing method as claimed in claim 1, wherein after the invoking each worker thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain the target worker thread corresponding to the bucket operation authority of the hash bucket to be processed, further comprises:
determining the remaining working threads, among the candidate working threads, other than the target working thread;
suspending the remaining working threads through a preset iteration mechanism;
and returning to the step of calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value when the rest working threads are called again, so as to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed.
6. The data sharing method as claimed in claim 5, wherein suspending the remaining working threads through a preset iteration mechanism further comprises:
and determining the memory space occupied by the remaining working threads, and releasing the memory space.
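Claims 5 and 6 describe what happens to the losing threads: they are suspended and later return to the competition step. A hedged sketch of such a retry loop (using `std::this_thread::yield` as a stand-in for the unspecified "preset iteration mechanism"):

```cpp
#include <atomic>
#include <thread>

struct Bucket {
    std::atomic<int> flag{0};   // 0 = free (the preset memory value)
};

// Losing threads are "suspended" (here: they yield the CPU) and then
// return to the competition step, as in claims 5 and 6 of the patent.
void acquire_with_retry(Bucket& b, int expected_free, int owner_id) {
    for (;;) {
        int expected = expected_free;
        if (b.flag.compare_exchange_weak(expected, owner_id,
                                         std::memory_order_acq_rel,
                                         std::memory_order_acquire))
            return;                    // won the bucket operation authority
        std::this_thread::yield();     // suspend briefly, then retry
    }
}
```

`compare_exchange_weak` is the conventional choice inside a retry loop, since a spurious failure simply causes one more iteration.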
7. The method for sharing data according to any one of claims 1 to 6, wherein querying a hash bucket to be processed corresponding to each working thread in a preset lock-free hash table according to each data read-write request includes:
extracting the query code contained in each data read-write request;
querying the unique advertisement identifier corresponding to each query code in the preset lock-free hash table, and determining the hash bucket containing the unique advertisement identifier;
and determining the hash bucket to be processed corresponding to each working thread according to the mapping relation between the query code and the hash bucket.
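The lookup path of claim 7 maps a query code to the bucket holding the matching advertisement identifier. Assuming the usual hash-modulo bucket placement (the patent does not specify the hash function), the mapping can be sketched as:

```cpp
#include <functional>
#include <string>

// Illustrative claim-7 lookup: the query code extracted from a data
// read-write request is hashed, and the hash value selects the bucket
// that stores the corresponding unique advertisement identifier.
std::size_t bucket_index(const std::string& query_code,
                         std::size_t bucket_count) {
    return std::hash<std::string>{}(query_code) % bucket_count;
}
```

Because the index depends only on the query code, every working thread carrying the same code competes for the same bucket, which is the precondition for the per-bucket permission competition above.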
8. A data sharing apparatus, characterized in that the data sharing apparatus comprises:
the query module is used for querying a hash bucket to be processed corresponding to each working thread in a preset lock-free hash table according to each data read-write request when receiving a data read-write request sent by at least one working thread, wherein the preset lock-free hash table comprises a hash value of at least one data to be shared in an advertisement engine system;
The reading module is used for reading the preset memory value of the hash bucket to be processed;
the calling module is used for calling each working thread to perform authority competition on the hash bucket to be processed based on the preset memory value to obtain a target working thread corresponding to the bucket operation authority of the hash bucket to be processed;
And the driving module is used for driving the target working thread to read and write the data in the hash bucket to be processed so as to realize data sharing.
9. A data sharing device, characterized in that the data sharing device comprises: a memory, a processor, and a data sharing program stored on the memory and executable on the processor, the data sharing program configured to implement the data sharing method of any one of claims 1 to 7.
10. A storage medium having stored thereon a data sharing program which when executed by a processor implements the data sharing method of any of claims 1 to 7.
CN202410071214.1A 2024-01-17 2024-01-17 Data sharing method, device, equipment and storage medium Pending CN117891625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410071214.1A CN117891625A (en) 2024-01-17 2024-01-17 Data sharing method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117891625A true CN117891625A (en) 2024-04-16

Family

ID=90648805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410071214.1A Pending CN117891625A (en) 2024-01-17 2024-01-17 Data sharing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117891625A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104252386A (en) * 2013-06-26 2014-12-31 阿里巴巴集团控股有限公司 Data update locking method and equipment
US20160110403A1 (en) * 2014-10-19 2016-04-21 Microsoft Corporation High performance transactions in database management systems
CN107992577A (en) * 2017-12-04 2018-05-04 北京奇安信科技有限公司 A kind of Hash table data conflict processing method and device
US10282307B1 (en) * 2017-12-22 2019-05-07 Dropbox, Inc. Lock-free shared hash map
CN112612419A (en) * 2020-12-25 2021-04-06 西安交通大学 Data storage structure, storage method, reading method, equipment and medium of NVM (non-volatile memory)


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUNCHANG WANG et al.: "DHash: Dynamic Hash Tables With Non-Blocking Regular Operations", IEEE Transactions on Parallel and Distributed Systems, 14 February 2022 (2022-02-14), pages 3274-3290 *
CHEN ZHIWEN: "Research on Concurrent Hash Tables in Multi-core Systems", China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 June 2018 (2018-06-15), pages 137-7 *

Similar Documents

Publication Publication Date Title
Ashkiani et al. A dynamic hash table for the GPU
US9405574B2 (en) System and method for transmitting complex structures based on a shared memory queue
US9251162B2 (en) Secure storage management system and method
US11269772B2 (en) Persistent memory storage engine device based on log structure and control method thereof
US11204813B2 (en) System and method for multidimensional search with a resource pool in a computing environment
US9767019B2 (en) Pauseless garbage collector write barrier
US11113316B2 (en) Localized data affinity system and hybrid method
CN101841473B (en) Method and apparatus for updating MAC (Media Access Control) address table
US11983159B2 (en) Systems and methods for management of a log-structure
KR101268437B1 (en) Method and system for maintaining consistency of a cache memory by multiple independent processes
US20060224949A1 (en) Exclusion control method and information processing apparatus
CN117891625A (en) Data sharing method, device, equipment and storage medium
CN111639076A (en) Cross-platform efficient key value storage method
CN113986775B (en) Page table item generation method, system and device in RISC-V CPU verification
CN112800057B (en) Fingerprint table management method and device
CN114489480A (en) Method and system for high-concurrency data storage
CN112800123A (en) Data processing method, data processing device, computer equipment and storage medium
US11874767B2 (en) Memory partitions for processing entities
KR100570731B1 (en) An Enhanced Second Chance Method for Selecting a Victim Buffer Page in a Multi-User Storage System
CN115951844B (en) File lock management method, equipment and medium of distributed file system
CN109766185B (en) Routing table item processing method and device
US11537431B1 (en) Task contention reduction via policy-based selection
Zhang et al. Reducing aborts in distributed transactional systems through dependency detection
CN114138728A (en) Method, device, equipment and storage medium for modifying shared file content
CN117743434A (en) Batch data warehousing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination