CN108170758A - High-concurrency data storage method and computer-readable storage medium - Google Patents
- Publication number
- CN108170758A CN108170758A CN201711406104.2A CN201711406104A CN108170758A CN 108170758 A CN108170758 A CN 108170758A CN 201711406104 A CN201711406104 A CN 201711406104A CN 108170758 A CN108170758 A CN 108170758A
- Authority
- CN
- China
- Prior art keywords
- buffer
- consumed
- queue
- buffer queue
- thread pool
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24552—Database cache management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/22—Indexing; Data structures therefor; Storage structures
- G06F16/2291—User-Defined Types; Storage management thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a high-concurrency data storage method and a computer-readable storage medium. The method includes: creating multiple lock-free concurrent-queue buffers; creating a free buffer queue and a to-be-consumed buffer queue; selecting a buffer from the free buffer queue as the write buffer; checking the write buffer at a preset cycle time and, if data has been written, adding the write buffer to the to-be-consumed buffer queue; continuing to execute the step of selecting a buffer from the free buffer queue as the write buffer; adding the buffers in the to-be-consumed buffer queue to the thread-pool queue; and threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them. The invention makes the load of the system and the production and consumption rates of data explicitly observable, while saving system resources.
Description
Technical field
The present invention relates to the technical field of data storage, and more particularly to a high-concurrency data storage method and a computer-readable storage medium.
Background art
Existing high-concurrency data storage schemes use a concurrent queue to cache data. The data in the concurrent queue is sharded; the sharding strategy may partition the queued data by time or by primary key (e.g. modulo or hash). The sharded data is then handed to a thread pool for multi-threaded consumption. Here, "consumption" means transferring the data to a downstream component, for example storing it in a database or sending it to a message middleware such as Kafka.
However, this scheme has the following disadvantages:
1. With a single high-concurrency queue, the load of the system and the production and consumption rates of the data cannot be obtained explicitly.
2. Sharding the data queue consumes resources regardless of the sharding strategy; for example, hashing by the primary key of the data requires a hash operation on the primary key of every record.
Summary of the invention
The technical problem to be solved by the invention is to provide a high-concurrency data storage method and a computer-readable storage medium that make the load of the system and the production and consumption rates of data explicitly observable, while saving system resources.
To solve the above technical problem, the technical solution adopted by the present invention is a high-concurrency data storage method, including:
creating multiple lock-free concurrent-queue buffers;
creating a free buffer queue and a to-be-consumed buffer queue, the free buffer queue being used to store idle buffers, and the to-be-consumed buffer queue being used to store buffers awaiting consumption;
selecting a buffer from the free buffer queue as the write buffer;
checking the write buffer at a preset cycle time and, if data has been written, adding the write buffer to the to-be-consumed buffer queue;
continuing to execute the step of selecting a buffer from the free buffer queue as the write buffer;
adding the buffers in the to-be-consumed buffer queue to the thread-pool queue;
threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them.
The invention further relates to a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the following steps are realized:
creating multiple lock-free concurrent-queue buffers;
creating a free buffer queue and a to-be-consumed buffer queue, the free buffer queue being used to store idle buffers, and the to-be-consumed buffer queue being used to store buffers awaiting consumption;
selecting a buffer from the free buffer queue as the write buffer;
checking the write buffer at a preset cycle time and, if data has been written, adding the write buffer to the to-be-consumed buffer queue;
continuing to execute the step of selecting a buffer from the free buffer queue as the write buffer;
adding the buffers in the to-be-consumed buffer queue to the thread-pool queue;
threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them.
The beneficial effects of the present invention are: using multiple buffers built from high-concurrency queues, the load of the system can be judged from the idle buffers and the data-holding buffers awaiting consumption. By setting the number of buffers to match the system's peak load capacity, the count of free buffers reflects the current load of the system, and the rate at which the free buffers change reflects the production and consumption rates of the data. Meanwhile, because data is written directly into different buffers, no sharding is needed, which saves system resources.
Description of the drawings
Fig. 1 is a flowchart of the high-concurrency data storage method of the present invention;
Fig. 2 is a flowchart of the method of Embodiment 1 of the present invention.
Detailed description of embodiments
To describe the technical content, objects and effects of the present invention in detail, the following explanation is given in conjunction with the embodiments and the accompanying drawings.
The most critical design of the present invention is: the buffers are implemented using CAS (compare-and-swap) algorithms; through scheduling, data writes switch continuously among multiple buffers; buffers that contain written data are consumed and dumped by a thread pool; and the consumption and storage of each buffer's data is completed by a single thread.
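As a rough illustration of this design, the current write buffer can be held in an `AtomicReference` and swapped by a scheduler using compare-and-swap, so writers never block. This is a minimal sketch, not the patented implementation; all class and method names are hypothetical:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: writers append to whichever buffer the
// AtomicReference currently points at; a scheduler swaps in an idle
// buffer with compareAndSet (CAS), so the write path stays lock-free.
public class WriteBufferSwitch {
    private final AtomicReference<Queue<String>> writeBuffer =
            new AtomicReference<>(new ConcurrentLinkedQueue<>());

    public void write(String record) {
        writeBuffer.get().add(record); // lock-free append
    }

    // Called periodically by the scheduler: if the current buffer has
    // data, replace it with an idle one and return the full buffer.
    public Queue<String> swapIfDirty(Queue<String> idle) {
        Queue<String> current = writeBuffer.get();
        if (current.isEmpty()) return null;          // nothing written yet
        if (writeBuffer.compareAndSet(current, idle)) {
            return current;                          // hand off for consumption
        }
        return null;                                 // lost the race; try next cycle
    }

    public static void main(String[] args) {
        WriteBufferSwitch s = new WriteBufferSwitch();
        s.write("a");
        s.write("b");
        Queue<String> full = s.swapIfDirty(new ConcurrentLinkedQueue<>());
        System.out.println(full.size()); // 2 records handed off in one swap
    }
}
```

The CAS on the reference means a writer that grabbed the old buffer just before the swap still appends to a queue that is about to be consumed, so no write is lost.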
Referring to Fig. 1, a high-concurrency data storage method includes:
creating multiple lock-free concurrent-queue buffers;
creating a free buffer queue and a to-be-consumed buffer queue, the free buffer queue being used to store idle buffers, and the to-be-consumed buffer queue being used to store buffers awaiting consumption;
selecting a buffer from the free buffer queue as the write buffer;
checking the write buffer at a preset cycle time and, if data has been written, adding the write buffer to the to-be-consumed buffer queue;
continuing to execute the step of selecting a buffer from the free buffer queue as the write buffer;
adding the buffers in the to-be-consumed buffer queue to the thread-pool queue;
threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them.
As can be seen from the above description, the beneficial effects of the present invention are that the load of the system and the production and consumption rates of data can be obtained explicitly, while system resources are saved.
Further, "adding the buffers in the to-be-consumed buffer queue to the thread-pool queue" is specifically:
monitoring the to-be-consumed buffer queue;
if the to-be-consumed buffer queue is not empty, adding the buffers in the to-be-consumed buffer queue to the thread-pool queue.
Further, "threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them" is specifically:
a thread in the thread pool takes out a buffer from the thread-pool queue, and batch-updates the data in the buffer to a database or transmits it in batches.
As can be seen from the above description, each thread consumes an independent buffer, which avoids data sharding and multi-thread lock contention; performance is improved while data consistency is guaranteed.
Further, after "threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them", the method further comprises:
adding the consumed buffer back to the free buffer queue.
The invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the following steps are realized:
creating multiple lock-free concurrent-queue buffers;
creating a free buffer queue and a to-be-consumed buffer queue, the free buffer queue being used to store idle buffers, and the to-be-consumed buffer queue being used to store buffers awaiting consumption;
selecting a buffer from the free buffer queue as the write buffer;
checking the write buffer at a preset cycle time and, if data has been written, adding the write buffer to the to-be-consumed buffer queue;
continuing to execute the step of selecting a buffer from the free buffer queue as the write buffer;
adding the buffers in the to-be-consumed buffer queue to the thread-pool queue;
threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them.
Further, "adding the buffers in the to-be-consumed buffer queue to the thread-pool queue" is specifically:
monitoring the to-be-consumed buffer queue;
if the to-be-consumed buffer queue is not empty, adding the buffers in the to-be-consumed buffer queue to the thread-pool queue.
Further, "threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them" is specifically:
a thread in the thread pool takes out a buffer from the thread-pool queue, and batch-updates the data in the buffer to a database or transmits it in batches.
Further, after "threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them", the method further comprises:
adding the consumed buffer back to the free buffer queue.
Embodiment one
Referring to Fig. 2, Embodiment 1 of the present invention is a high-concurrency data storage method including the following steps:
S1: Create multiple lock-free concurrent-queue buffers. A buffer can be implemented by wrapping Java's concurrent queue ConcurrentLinkedQueue.
S2: Create a free buffer queue and a to-be-consumed buffer queue. The free buffer queue stores idle buffers, i.e. buffers without data; the to-be-consumed buffer queue stores buffers awaiting consumption, i.e. buffers holding data.
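Steps S1 and S2 can be sketched as follows. This is a minimal illustration only; the buffer count, record type, and all names are hypothetical, not taken from the patent:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of S1-S2: a buffer wraps Java's lock-free
// ConcurrentLinkedQueue, and two queues track idle vs. to-be-consumed buffers.
public class BufferPoolDemo {
    // A buffer is just a wrapped lock-free concurrent queue of records.
    static class Buffer {
        final ConcurrentLinkedQueue<String> data = new ConcurrentLinkedQueue<>();
        boolean isEmpty() { return data.isEmpty(); }
    }

    public static void main(String[] args) {
        int bufferCount = 4; // sized roughly to the system's peak load (assumption)
        ConcurrentLinkedQueue<Buffer> freeBuffers = new ConcurrentLinkedQueue<>();
        ConcurrentLinkedQueue<Buffer> toConsume   = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < bufferCount; i++) freeBuffers.add(new Buffer()); // S1

        Buffer write = freeBuffers.poll();          // S3: pick a write buffer
        write.data.add("record-1");                 // a producer writes data
        if (!write.isEmpty()) toConsume.add(write); // S4/S5: has data, consume later
        System.out.println(freeBuffers.size() + " free, " + toConsume.size() + " pending");
    }
}
```

Because every structure here is itself a lock-free queue, producers, the scheduler, and consumers can all touch the pool without acquiring a lock.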
S3: Select a buffer from the free buffer queue as the write buffer.
S4: Check the write buffer at the preset cycle time and judge whether data has been written; if so, execute step S5.
S5: Add the write buffer to the to-be-consumed buffer queue. That is, the current write buffer is checked at regular intervals; if data has been written, the state of the write buffer is changed to to-be-consumed and the buffer is placed into the to-be-consumed buffer queue. The buffers in the to-be-consumed buffer queue are handed to the thread pool for processing, i.e. step S6 is executed. Meanwhile, another buffer is selected from the free buffer queue as the write buffer, i.e. step S3 is executed again.
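The scheduling loop of steps S3-S5 can be sketched as a periodic "tick" (in practice it might run on a ScheduledExecutorService every few seconds; the names below are hypothetical):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of S3-S5: a scheduler tick runs every cycle; if the
// write buffer holds data it is moved to the to-be-consumed queue and a
// fresh buffer is taken from the free queue.
public class SwapScheduler {
    final Queue<Queue<String>> free = new ConcurrentLinkedQueue<>();
    final Queue<Queue<String>> toConsume = new ConcurrentLinkedQueue<>();
    Queue<String> writeBuffer;

    SwapScheduler(int buffers) {
        for (int i = 0; i < buffers; i++) free.add(new ConcurrentLinkedQueue<>());
        writeBuffer = free.poll(); // S3: initial write buffer
    }

    // Invoked once per preset cycle (e.g. every 3 s via a scheduler thread).
    void tick() {
        if (!writeBuffer.isEmpty()) {        // S4: data was written
            toConsume.add(writeBuffer);      // S5: queue it for consumption
            writeBuffer = free.poll();       // back to S3 with a fresh buffer
        }
    }

    public static void main(String[] args) {
        SwapScheduler s = new SwapScheduler(3);
        s.tick();                 // no data yet: nothing moves
        s.writeBuffer.add("x");
        s.tick();                 // data present: buffer is swapped out
        System.out.println(s.toConsume.size()); // one buffer now pending
    }
}
```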
S6: Add the buffers in the to-be-consumed buffer queue to the thread-pool queue. Specifically, this can be realized by monitoring the to-be-consumed buffer queue: when it is not empty, the buffers in it are added to the thread-pool queue for consumption.
S7: Threads in the thread pool take out buffers from the thread-pool queue in turn and consume them. Specifically, a thread in the thread pool takes out a buffer from the thread-pool queue and batch-updates the data in the buffer to a database or transmits it in batches. That is, each thread batch-consumes the data of one buffer: for example, the data may be converted into corresponding mapping objects and batch-updated to a database; converted into corresponding events and batch-transferred through the log component Flume; converted into corresponding topics and batch-transferred through the message middleware Kafka; or otherwise processed according to the specific business.
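Steps S6-S7 can be sketched as follows, with the downstream component (database, Flume, Kafka) stubbed by an in-memory batch; names are hypothetical and the pool sizing is an assumption:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of S6-S7: each buffer is handed to the thread pool
// as one task, so a single thread drains one whole buffer and ships the
// records downstream as a batch (downstream is stubbed here).
public class PoolConsumer {
    static List<String> drainToBatch(Queue<String> buffer) {
        List<String> batch = new ArrayList<>();
        String r;
        while ((r = buffer.poll()) != null) batch.add(r); // one buffer, one thread
        return batch; // would be batch-inserted / batch-published downstream
    }

    public static void main(String[] args) throws Exception {
        Queue<String> buffer = new ConcurrentLinkedQueue<>(List.of("a", "b", "c"));
        ExecutorService pool = Executors.newFixedThreadPool(2);        // S6: thread pool
        Future<List<String>> f = pool.submit(() -> drainToBatch(buffer)); // S7
        System.out.println(f.get().size()); // all 3 records consumed as one batch
        pool.shutdown();
    }
}
```

Because one thread owns one buffer for the whole drain, the batch keeps the buffer's insertion order without any per-record locking.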
S8: Add the consumed buffer back to the free buffer queue. After the data has been stored, the now-empty buffer is put back into the free buffer queue.
This embodiment uses multiple buffers built from high-concurrency queues, so the load of the system can be judged from the idle buffers and the data-holding buffers awaiting consumption. The system is tested in advance and the number of buffers is set to match the system's peak load capacity; the number of free buffers then reflects the current load of the system: the more free buffers, the lighter the load. The rate of change of the free buffers reflects the production and consumption rates of the data: if the free buffers keep decreasing, consumption is lagging behind production.
The number of free buffers thus embodies the server side's data consumption capacity and load. The system can monitor the free buffer queue and provide degradation capability for the service. As the business grows, the load of the whole service cluster can be monitored through the free buffer counts, and horizontal scaling can be carried out at an appropriate time, for example adding cluster nodes or tuning node parameters.
Meanwhile, because data is written directly into different buffers, this embodiment needs no sharding and saves system resources.
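The load metric described above can be illustrated with a small helper. The patent does not prescribe a formula, so this sketch simply assumes load is the fraction of non-idle buffers; the names are hypothetical:

```java
// Hypothetical sketch of the load metric: with the number of buffers set
// to match the system's peak capacity, the fraction of buffers that are
// NOT idle approximates the current load.
public class LoadMonitor {
    // Returns load as a percentage: fewer free buffers means higher load.
    static int loadPercent(int freeBuffers, int totalBuffers) {
        return 100 * (totalBuffers - freeBuffers) / totalBuffers;
    }

    public static void main(String[] args) {
        System.out.println(loadPercent(8, 10)); // prints 20: lightly loaded
        System.out.println(loadPercent(1, 10)); // prints 90: nearly saturated,
                                                // a trigger for degradation or scale-out
    }
}
```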
Embodiment two
This embodiment is a concrete application scenario of Embodiment 1.
In a data collection service, SDKs in multiple clients continuously transfer user behavior data to the server side over HTTP. After the server side's data reception module receives the data, it requests a buffer object from the buffer pool; the first request returns a new buffer. After obtaining the buffer, the data reception module writes the data into it.
After 3 s have passed, the scheduling monitor finds that the buffer contains written data, replaces the data-holding buffer with an idle one, and puts the data-holding buffer into the to-be-consumed buffer queue. If the data reception module requests a buffer at this moment, the replacement free buffer is returned.
The consumption monitor detects that a buffer object is present in the to-be-consumed buffer queue and puts it into the thread-pool queue for batch processing. After all the data has been stored, the now-empty buffer is put back into the free buffer queue.
The number of free buffers embodies the service's data production/consumption capacity and the load of the current service. As the business grows, the service can be scaled horizontally by adding service instances, improving the throughput of the server side through load balancing.
Embodiment three
This embodiment is a computer-readable storage medium corresponding to the above embodiments, on which a computer program is stored; when the program is executed by a processor, the following steps are realized:
creating multiple lock-free concurrent-queue buffers;
creating a free buffer queue and a to-be-consumed buffer queue, the free buffer queue being used to store idle buffers, and the to-be-consumed buffer queue being used to store buffers awaiting consumption;
selecting a buffer from the free buffer queue as the write buffer;
checking the write buffer at a preset cycle time and, if data has been written, adding the write buffer to the to-be-consumed buffer queue;
continuing to execute the step of selecting a buffer from the free buffer queue as the write buffer;
adding the buffers in the to-be-consumed buffer queue to the thread-pool queue;
threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them.
Further, "adding the buffers in the to-be-consumed buffer queue to the thread-pool queue" is specifically:
monitoring the to-be-consumed buffer queue;
if the to-be-consumed buffer queue is not empty, adding the buffers in the to-be-consumed buffer queue to the thread-pool queue.
Further, "threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them" is specifically:
a thread in the thread pool takes out a buffer from the thread-pool queue, and batch-updates the data in the buffer to a database or transmits it in batches.
Further, after "threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them", the method further comprises:
adding the consumed buffer back to the free buffer queue.
In conclusion, the high-concurrency data storage method and computer-readable storage medium provided by the invention use multiple buffers built from high-concurrency queues, so the load of the system can be judged from the idle buffers and the data-holding buffers awaiting consumption. By setting the number of buffers to match the system's peak load capacity, the free buffer count reflects the current load of the system: the more free buffers, the lighter the load. The rate of change of the free buffers reflects the production and consumption rates of the data: if the free buffers keep decreasing, consumption is lagging behind production. Meanwhile, because data is written directly into different buffers, no sharding is needed and system resources are saved.
The foregoing are merely embodiments of the present invention and do not limit the patent scope of the invention. Any equivalent transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in related technical fields, is likewise included within the patent protection scope of the present invention.
Claims (8)
1. A high-concurrency data storage method, characterized by comprising:
creating multiple lock-free concurrent-queue buffers;
creating a free buffer queue and a to-be-consumed buffer queue, the free buffer queue being used to store idle buffers, and the to-be-consumed buffer queue being used to store buffers awaiting consumption;
selecting a buffer from the free buffer queue as the write buffer;
checking the write buffer at a preset cycle time and, if data has been written, adding the write buffer to the to-be-consumed buffer queue;
continuing to execute the step of selecting a buffer from the free buffer queue as the write buffer;
adding the buffers in the to-be-consumed buffer queue to the thread-pool queue;
threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them.
2. The high-concurrency data storage method according to claim 1, characterized in that "adding the buffers in the to-be-consumed buffer queue to the thread-pool queue" is specifically:
monitoring the to-be-consumed buffer queue;
if the to-be-consumed buffer queue is not empty, adding the buffers in the to-be-consumed buffer queue to the thread-pool queue.
3. The high-concurrency data storage method according to claim 1, characterized in that "threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them" is specifically:
a thread in the thread pool takes out a buffer from the thread-pool queue, and batch-updates the data in the buffer to a database or transmits it in batches.
4. The high-concurrency data storage method according to claim 1, characterized in that after "threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them", the method further comprises:
adding the consumed buffer back to the free buffer queue.
5. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the following steps are realized:
creating multiple lock-free concurrent-queue buffers;
creating a free buffer queue and a to-be-consumed buffer queue, the free buffer queue being used to store idle buffers, and the to-be-consumed buffer queue being used to store buffers awaiting consumption;
selecting a buffer from the free buffer queue as the write buffer;
checking the write buffer at a preset cycle time and, if data has been written, adding the write buffer to the to-be-consumed buffer queue;
continuing to execute the step of selecting a buffer from the free buffer queue as the write buffer;
adding the buffers in the to-be-consumed buffer queue to the thread-pool queue;
threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them.
6. The computer-readable storage medium according to claim 5, characterized in that "adding the buffers in the to-be-consumed buffer queue to the thread-pool queue" is specifically:
monitoring the to-be-consumed buffer queue;
if the to-be-consumed buffer queue is not empty, adding the buffers in the to-be-consumed buffer queue to the thread-pool queue.
7. The computer-readable storage medium according to claim 5, characterized in that "threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them" is specifically:
a thread in the thread pool takes out a buffer from the thread-pool queue, and batch-updates the data in the buffer to a database or transmits it in batches.
8. The computer-readable storage medium according to claim 5, characterized in that after "threads in the thread pool taking out buffers from the thread-pool queue in turn and consuming them", the method further comprises:
adding the consumed buffer back to the free buffer queue.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711406104.2A CN108170758A (en) | 2017-12-22 | 2017-12-22 | High-concurrency data storage method and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711406104.2A CN108170758A (en) | 2017-12-22 | 2017-12-22 | High-concurrency data storage method and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108170758A true CN108170758A (en) | 2018-06-15 |
Family
ID=62523439
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711406104.2A Pending CN108170758A (en) | 2017-12-22 | 2017-12-22 | High concurrent date storage method and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108170758A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110275918A (en) * | 2019-06-17 | 2019-09-24 | 浙江百应科技有限公司 | A fast and stable import system for million-level Excel data |
CN111338583A (en) * | 2020-05-19 | 2020-06-26 | 北京数字绿土科技有限公司 | High-frequency data storage method, structure and computer |
CN111767339A (en) * | 2020-05-11 | 2020-10-13 | 北京奇艺世纪科技有限公司 | Data synchronization method and device, electronic equipment and storage medium |
CN112527844A (en) * | 2020-12-22 | 2021-03-19 | 北京明朝万达科技股份有限公司 | Data processing method and device and database architecture |
CN112764673A (en) * | 2020-12-28 | 2021-05-07 | 中国测绘科学研究院 | Storage rate optimization method and device, computer equipment and storage medium |
CN113311994A (en) * | 2021-04-09 | 2021-08-27 | 中企云链(北京)金融信息服务有限公司 | Data caching method based on high concurrency |
CN114579053A (en) * | 2022-03-02 | 2022-06-03 | 统信软件技术有限公司 | Data reading and writing method and device, computing equipment and storage medium |
WO2022142157A1 (en) * | 2020-12-30 | 2022-07-07 | 稿定(厦门)科技有限公司 | Double-buffering encoding system and control method therefor |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101150485A (en) * | 2007-11-15 | 2008-03-26 | 曙光信息产业(北京)有限公司 | A management method for network data transmission of zero copy buffer queue |
WO2009012572A2 (en) * | 2007-07-23 | 2009-01-29 | Redknee Inc. | Method and apparatus for data processing using queuing |
CN102096598A (en) * | 2010-12-30 | 2011-06-15 | 广州市聚晖电子科技有限公司 | Virtual machine system and implementing method thereof |
CN102761489A (en) * | 2012-07-17 | 2012-10-31 | 中国科学技术大学苏州研究院 | Inter-core communication method realizing data packet zero-copying based on pipelining mode |
CN104809027A (en) * | 2015-04-21 | 2015-07-29 | 浙江大学 | Data collection method based on lock-free buffer region |
CN105959161A (en) * | 2016-07-08 | 2016-09-21 | 中国人民解放军国防科学技术大学 | High-speed data packet construction and distribution control method and device |
CN105978968A (en) * | 2016-05-11 | 2016-09-28 | 山东合天智汇信息技术有限公司 | Real-time transmission processing method, server and system of mass data |
- 2017-12-22: Application CN201711406104.2A filed in China; published as CN108170758A; status: Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009012572A2 (en) * | 2007-07-23 | 2009-01-29 | Redknee Inc. | Method and apparatus for data processing using queuing |
CN101150485A (en) * | 2007-11-15 | 2008-03-26 | 曙光信息产业(北京)有限公司 | A management method for network data transmission of zero copy buffer queue |
CN102096598A (en) * | 2010-12-30 | 2011-06-15 | 广州市聚晖电子科技有限公司 | Virtual machine system and implementing method thereof |
CN102761489A (en) * | 2012-07-17 | 2012-10-31 | 中国科学技术大学苏州研究院 | Inter-core communication method realizing data packet zero-copying based on pipelining mode |
CN104809027A (en) * | 2015-04-21 | 2015-07-29 | 浙江大学 | Data collection method based on lock-free buffer region |
CN105978968A (en) * | 2016-05-11 | 2016-09-28 | 山东合天智汇信息技术有限公司 | Real-time transmission processing method, server and system of mass data |
CN105959161A (en) * | 2016-07-08 | 2016-09-21 | 中国人民解放军国防科学技术大学 | High-speed data packet construction and distribution control method and device |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110275918A (en) * | 2019-06-17 | 2019-09-24 | 浙江百应科技有限公司 | A fast and stable import system for million-level Excel data |
CN111767339A (en) * | 2020-05-11 | 2020-10-13 | 北京奇艺世纪科技有限公司 | Data synchronization method and device, electronic equipment and storage medium |
CN111767339B (en) * | 2020-05-11 | 2023-06-30 | 北京奇艺世纪科技有限公司 | Data synchronization method and device, electronic equipment and storage medium |
CN111338583A (en) * | 2020-05-19 | 2020-06-26 | 北京数字绿土科技有限公司 | High-frequency data storage method, structure and computer |
CN112527844A (en) * | 2020-12-22 | 2021-03-19 | 北京明朝万达科技股份有限公司 | Data processing method and device and database architecture |
CN112764673A (en) * | 2020-12-28 | 2021-05-07 | 中国测绘科学研究院 | Storage rate optimization method and device, computer equipment and storage medium |
CN112764673B (en) * | 2020-12-28 | 2024-03-15 | 中国测绘科学研究院 | Hyperspectral linear array data storage rate optimization method, device and storage medium |
WO2022142157A1 (en) * | 2020-12-30 | 2022-07-07 | 稿定(厦门)科技有限公司 | Double-buffering encoding system and control method therefor |
CN113311994A (en) * | 2021-04-09 | 2021-08-27 | 中企云链(北京)金融信息服务有限公司 | Data caching method based on high concurrency |
CN114579053A (en) * | 2022-03-02 | 2022-06-03 | 统信软件技术有限公司 | Data reading and writing method and device, computing equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108170758A (en) | High-concurrency data storage method and computer-readable storage medium | |
US8438572B2 (en) | Task scheduling method and apparatus | |
US10671458B2 (en) | Epoll optimisations | |
US8892827B2 (en) | Cooperative memory management | |
US10733019B2 (en) | Apparatus and method for data processing | |
KR102466984B1 (en) | Improved function callback mechanism between a central processing unit (cpu) and an auxiliary processor | |
US9973512B2 (en) | Determining variable wait time in an asynchronous call-back system based on calculated average sub-queue wait time | |
US20110066830A1 (en) | Cache prefill on thread migration | |
KR20180053359A (en) | Efficient scheduling of multi-version tasks | |
US20110067029A1 (en) | Thread shift: allocating threads to cores | |
CN113051057A (en) | Multithreading data lock-free processing method and device and electronic equipment | |
US20120297216A1 (en) | Dynamically selecting active polling or timed waits | |
US10037225B2 (en) | Method and system for scheduling computing | |
US20130097382A1 (en) | Multi-core processor system, computer product, and control method | |
CN108021434A (en) | Data processing apparatus, method of processing data thereof, medium, and storage controller | |
CN114116155A (en) | Lock-free work stealing thread scheduler | |
CN115562838A (en) | Resource scheduling method and device, computer equipment and storage medium | |
WO2022160628A1 (en) | Command processing apparatus and method, electronic device, and computer-readable storage medium | |
US11392388B2 (en) | System and method for dynamic determination of a number of parallel threads for a request | |
US11474868B1 (en) | Sharded polling system | |
CN117573355A (en) | Task processing method, device, electronic equipment and storage medium | |
CN116795503A (en) | Task scheduling method, task scheduling device, graphic processor and electronic equipment | |
CN104769553A (en) | System and method for supporting work sharing muxing in a cluster | |
JP2012093832A (en) | Information processor | |
CN112114967B (en) | GPU resource reservation method based on service priority |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180615 |