CN113986961B - Distributed high-concurrency message matching method - Google Patents

Distributed high-concurrency message matching method

Info

Publication number
CN113986961B
CN113986961B · CN202111270752.6A
Authority
CN
China
Prior art keywords
data
processor
cache
concurrency
backup
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111270752.6A
Other languages
Chinese (zh)
Other versions
CN113986961A (en)
Inventor
周鑫
陈忠国
李忱
江何
门殿春
孟繁荣
姚志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Testor Technology Co ltd
Beijing Tongtech Co Ltd
Original Assignee
Beijing Testor Technology Co ltd
Beijing Tongtech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Testor Technology Co Ltd and Beijing Tongtech Co Ltd
Priority to CN202111270752.6A
Publication of CN113986961A
Application granted
Publication of CN113986961B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2453 Query optimisation
    • G06F16/24532 Query optimisation of parallel queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1446 Point-in-time backing up or restoration of persistent data
    • G06F11/1448 Management of the data involved in backup or backup restore
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22 Indexing; Data structures therefor; Storage structures
    • G06F16/2228 Indexing structures
    • G06F16/2255 Hash tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24552 Database cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 Relational databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a distributed high-concurrency message matching method, in the technical field of high-concurrency servers. A linear message queue arrangement is adopted so that data which cannot be processed immediately is queued and processed in order, ensuring that every item is processed, and a leaky bucket algorithm is paired with the queue so that the lower-layer cache of the processor is distributed adaptively. A backup processor processes the overflow in parallel, working synchronously while high-concurrency messages are handled, so that a double-bucket strategy is formed. Together with multiple backup servers of the database, and because the cached data is kept non-duplicated across multiple middle layers, the data reading speed of the two processors is improved; having several identical data sources read different data synchronously preserves the response speed under high access volume. The method is decisive for the stability of the database in the high-read state required by high-concurrency message matching.

Description

Distributed high-concurrency message matching method
Technical Field
The invention relates to the technical field of high-concurrency servers, in particular to a distributed high-concurrency message matching method.
Background
At the same moment, or within a very short time, a large number of requests arrive at the server. Each request requires the server to consume resources to process it and return a response, and the number of processes a server can start, and the threads, network connections, CPU operations, I/O and memory it can use simultaneously, are all limited, so the number of requests it can process at once is also limited. The essence of high concurrency is resource limitation. For example, a system with 100,000 online users does not have 100,000 concurrent users: all 100,000 may simply be viewing a static article on the home page at the same time. Whether concurrency is high depends on how many real users are sending requests that the server must consume resources to process. If the server can open only 100 threads and one thread takes exactly 1 s to process one request, the server can process only 100 requests per second.
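To make the arithmetic concrete, the sketch below computes this throughput ceiling from the two figures named above; the class and variable names are illustrative, not taken from the patent.

public class ThroughputBound {
    public static void main(String[] args) {
        int threads = 100;            // threads the server can run at once
        double serviceTimeSec = 1.0;  // seconds one thread spends on one request
        // Each thread completes 1/serviceTimeSec requests per second, so the
        // ceiling is threads / serviceTimeSec, regardless of how many users
        // are merely online.
        double maxRequestsPerSec = threads / serviceTimeSec;
        System.out.printf("Ceiling: %.0f requests/second%n", maxRequestsPerSec);
    }
}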
Under large-data access, a large amount of data is read, so the storage pressure on the data rises in step and the safety of the data cannot be effectively guaranteed, while surplus requests go unprocessed. With high data concurrency, processing cannot be done in real time; meanwhile, under the sustained pressure of large data volumes, continuously high access places heavy pressure on the database server and reduces its response speed, which further constrains the processor's efficiency in matching high-concurrency messages, so optimization is needed.
Disclosure of Invention
In order to overcome the above defects in the prior art, the present invention provides a distributed high-concurrency message matching method. The technical problems to be solved are: when high concurrency occurs, current servers can hardly be maintained without stopping and can hardly process requests effectively and stably in the high-concurrency state; constrained by the maximum thread count of a single core, processing efficiency under high concurrency is hard to coordinate; and the database response speed suffers heavy pressure, causing slow responses and limiting message matching efficiency.
In order to achieve this purpose, the invention provides the following technical scheme: a distributed high-concurrency message matching method comprising the following steps:
S1, performing hash calculation with MurmurHash, establishing an access data cache at the lower layer of the processor, and arranging the cached messages into a linear queue through Java's TreeMap.
S2, establishing a data-caching intermediate layer between the processor and mysql, and establishing a search engine, based on the intermediate layer, over the data in mysql.
S3, introducing a leaky bucket algorithm into the linear-queue data cache at the lower layer of the processor, realizing a smooth rate-limiting strategy for burst traffic.
S4, establishing a plurality of data backup servers at the lower layer of mysql, each loaded with an independent mysql instance.
S5, establishing a middle layer on the mysql of each data backup server and connecting these middle layers to the same processor.
S6, retrieving data across the middle layers of the data backup servers while keeping the cached data differentiated.
S7, introducing the messages rejected by the smooth rate-limiting strategy into a backup processor, realizing double-bucket shunt processing of the rate-limited data flow.
As a further scheme of the invention: the backup processor is connected with the plurality of middle layers corresponding to the main processor, and a secondary linear-queue data cache is synchronously built at the lower layer of the backup processor, with a leaky bucket algorithm introduced.
As a further scheme of the invention: the difference of the cached data is that the cached data is processed in order of priority based on non-duplication of the cached data to keep the cache capacity at a minimum of ninety percent occupancy and not cleared.
As a further scheme of the invention: the leaky bucket algorithm comprises the following steps:
a: and setting the maximum capacity of the linear queue data buffer according to the data buffer at the lower layer of the processor.
b: and setting the maximum data cache leakage speed of the lower layer of the processor according to the maximum parallel processing amount and the speed of the processor.
c: and acquiring the total data processing amount according to the product of the data caching processing speed and the time interval.
d: judging the data cache capacity allowance of the linear queue, if not, continuing to input data, if the capacity is full, refusing data input, and storing the data into a secondary linear queue data cache corresponding to the backup processor for synchronous processing.
As a further scheme of the invention: the working process of the memcache comprises the following steps:
a: and checking whether the request data of the client is in the memcache.
b: if the request data is directly returned, no operation is performed on the database, if the requested data is not in the memcache, the database is accessed, the data is obtained from the database through the established search engine and returned to the client, and meanwhile, one copy of the data is cached in the memcache.
As a further scheme of the invention: the middle tier comprises the distributed cache system memcache, developed by Brad Fitzpatrick of LiveJournal.
The invention has the following beneficial effects:
A linear message queue arrangement is adopted so that data which cannot be processed immediately is queued and processed in order, ensuring that every item is processed; paired with the leaky bucket algorithm, the lower-layer cache of the processor is distributed adaptively. By having a backup processor process the overflow in parallel, the two processors can be switched out for non-stop maintenance after long periods of use and debugging, and the two process high-concurrency messages synchronously to form a double-bucket strategy. The multiple backup servers of the database make the data hard to lose; because the cached data is non-duplicated across the multiple middle layers, the reading speed of the two processors improves; and having several identical data sources read different data synchronously preserves the response speed under high access volume. The method is decisive for the stability of the database in the high-read state required by high-concurrency message matching.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
A distributed high-concurrency message matching method comprises the following steps:
S1, performing hash calculation with MurmurHash, establishing an access data cache at the lower layer of the processor, and arranging the cached messages into a linear queue through Java's TreeMap.
S2, establishing a data-caching intermediate layer between the processor and mysql, and establishing a search engine, based on the intermediate layer, over the data in mysql.
S3, introducing a leaky bucket algorithm into the linear-queue data cache at the lower layer of the processor, realizing a smooth rate-limiting strategy for burst traffic.
S4, establishing a plurality of data backup servers at the lower layer of mysql, each loaded with an independent mysql instance.
S5, establishing a middle layer on the mysql of each data backup server and connecting these middle layers to the same processor.
S6, retrieving data across the middle layers of the data backup servers while keeping the cached data differentiated.
S7, introducing the messages rejected by the smooth rate-limiting strategy into a backup processor, realizing double-bucket shunt processing of the rate-limited data flow.
By establishing the access data cache at the lower layer of the processor, high-concurrency messages are buffered effectively; at the same time, the linear queue strategy lets the processor process requests asynchronously in a high-concurrency distributed environment, relieving pressure on the system.
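As an illustration of step S1, the sketch below keys a TreeMap with a monotonically increasing sequence number, so its sorted order behaves as a FIFO linear queue, and derives a cache key with MurmurHash3. Guava's murmur3_128 is an assumption here: the patent names MurmurHash and Java's TreeMap but no particular library.

import com.google.common.hash.Hashing;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.TreeMap;

// Lower-layer access cache: a TreeMap keyed by an increasing sequence
// number yields FIFO order, while a MurmurHash of the payload serves as
// the lookup key for the lower-layer access data cache.
public class LinearMessageQueue {
    private final TreeMap<Long, String> queue = new TreeMap<>();
    private long sequence = 0;

    /** Enqueue a message and return its MurmurHash cache key. */
    public synchronized long enqueue(String message) {
        queue.put(++sequence, message);
        return Hashing.murmur3_128()
                .hashString(message, StandardCharsets.UTF_8)
                .asLong();
    }

    /** Remove and return the oldest message, or null if the queue is empty. */
    public synchronized String poll() {
        Map.Entry<Long, String> head = queue.pollFirstEntry();
        return head == null ? null : head.getValue();
    }
}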
By establishing the search engine, based on the middle layer, over the mysql database, data stored in mysql can be accessed quickly from the middle layer; horizontal retrieval of the middle-layer cached data proceeds synchronously, the amount of data cached in the middle layer expands, and the overall data response efficiency improves.
By replicating this architecture, two or more processors can be scaled out horizontally according to the concurrency demand, making the performance profile of the whole architecture more flexible and allowing the carrying capacity under high concurrency to be expanded as needed.
In other embodiments, the backup processor is connected with the plurality of middle layers corresponding to the main processor, and a secondary linear-queue data cache is synchronously built at the lower layer of the backup processor, with a leaky bucket algorithm introduced.
Because the backup processor is connected with the corresponding middle layers, it shares resources with the main processor under a definite priority, giving resource calls in the dual-core state a degree of assurance. At the same time, building the secondary linear-queue data cache with its own leaky bucket algorithm forms a branch off the main link, yielding a double-bucket strategy that processes request data efficiently: in the non-high-concurrency state a single core handles requests immediately, and in the high-concurrency state the branch takes the shunted traffic once the main linear queue is fully loaded.
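The shunting just described can be sketched as two bounded queues with overflow routing. This is a hedged illustration: ArrayBlockingQueue stands in for the patent's TreeMap-based linear queues, and the capacities are arbitrary.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Double-bucket routing: requests go to the main processor's queue first;
// only when it is full are they shunted to the backup processor's
// secondary queue, which drains in parallel.
public class DoubleBucketRouter {
    private final BlockingQueue<String> mainQueue = new ArrayBlockingQueue<>(10_000);
    private final BlockingQueue<String> backupQueue = new ArrayBlockingQueue<>(10_000);

    /** @return true if either bucket accepted the request. */
    public boolean submit(String request) {
        if (mainQueue.offer(request)) {
            return true;                   // normal path: main processor
        }
        return backupQueue.offer(request); // overflow path: backup processor
    }
}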
In other embodiments, the differentiation of the cached data means that, on the basis of keeping the cached data non-duplicated, the cached data is processed in priority order so that cache occupancy is kept at a minimum of ninety percent and the cache is not cleared.
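The patent states the non-duplication goal but not a routing rule. One minimal way to keep the middle-layer caches disjoint is to assign each key to exactly one layer by hash, as in the assumed sketch below.

// Each key is cached on exactly one of the N middle layers, so the
// layers hold disjoint data and their combined capacity is fully used.
public class MiddleLayerRouter {
    private final int layerCount;

    public MiddleLayerRouter(int layerCount) {
        this.layerCount = layerCount;
    }

    /** Index of the middle layer responsible for this key. */
    public int layerFor(String key) {
        // floorMod guards against negative hashCode values.
        return Math.floorMod(key.hashCode(), layerCount);
    }
}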
In other embodiments, the leaky bucket algorithm comprises the steps of:
a: setting the maximum capacity of the linear-queue data cache according to the data cache at the lower layer of the processor;
b: setting the maximum leak rate of the lower-layer data cache according to the maximum parallel processing volume and speed of the processor;
c: obtaining the total processed data volume as the product of the data-cache processing speed and the time interval;
d: checking the remaining capacity of the linear-queue data cache; if it is not full, data input continues; if it is full, data input is refused and the data is stored into the secondary linear-queue data cache of the backup processor for synchronous processing.
By adopting the leaky bucket algorithm, the data transmission rate can be forcibly limited, making transmission more stable, reducing the pressure on the server under high concurrency and improving the server's overall stability.
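Steps a to d condense into a small meter. In the sketch below the drain is computed from elapsed time and the processor's maximum processing rate, and a false return corresponds to step d's overflow into the backup processor's secondary queue; the constants and names are illustrative.

// Leaky bucket in front of the main linear queue:
//   maxCapacity - queue capacity (step a)
//   leakPerSec  - max processing rate of the processor (step b)
//   drained     - leakPerSec * elapsed time (step c)
//   reject when full; rejected data goes to the backup queue (step d)
public class LeakyBucket {
    private final long maxCapacity;
    private final double leakPerSec;
    private double water;                       // current occupancy
    private long lastNanos = System.nanoTime();

    public LeakyBucket(long maxCapacity, double leakPerSec) {
        this.maxCapacity = maxCapacity;
        this.leakPerSec = leakPerSec;
    }

    /** @return true if the item is admitted, false if it must overflow. */
    public synchronized boolean tryAdmit() {
        long now = System.nanoTime();
        double elapsedSec = (now - lastNanos) / 1e9;
        lastNanos = now;
        water = Math.max(0.0, water - leakPerSec * elapsedSec); // step c
        if (water + 1 > maxCapacity) {
            return false;  // step d: full, route to the backup processor
        }
        water += 1;
        return true;
    }
}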
In other embodiments, the middle tier comprises the distributed cache system memcache, developed by Brad Fitzpatrick of LiveJournal.
In other embodiments, the memcache workflow includes:
a: checking whether the request data of the client is in the memcache;
b: if yes, the requested data is returned directly and no operation is performed on the database; if the requested data is not in the memcache, the database is accessed, the data is obtained from the database through the established search engine and returned to the client, and one copy of the data is cached in the memcache.
Because the middle-layer memcache maintains one single, huge hash table in memory, it can store data in many formats, including images, videos, files and database retrieval results; data is loaded into memory and then read from memory, which greatly improves read speed.
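The workflow in steps a and b is the classic cache-aside read path. The following self-contained sketch uses a ConcurrentHashMap and a stub interface in place of a real memcache client and the mysql-backed search engine; in a real deployment a memcached client library would take their place.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside reads: check the cache first (step a); on a miss, query the
// database through the search engine, return the result to the caller,
// and cache one copy for subsequent requests (step b).
public class CacheAsideReader {
    /** Stub standing in for the search engine over mysql. */
    public interface SearchEngine {
        String query(String key);
    }

    private final Map<String, String> memcache = new ConcurrentHashMap<>();
    private final SearchEngine engine;

    public CacheAsideReader(SearchEngine engine) {
        this.engine = engine;
    }

    public String get(String key) {
        String cached = memcache.get(key);  // step a: cache lookup
        if (cached != null) {
            return cached;                  // hit: the database is untouched
        }
        String fromDb = engine.query(key);  // step b: miss, go to the database
        if (fromDb != null) {
            memcache.put(key, fromDb);      // cache one copy
        }
        return fromDb;
    }
}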
Example 2:
A distributed high-concurrency message matching method comprises the following steps:
S1, performing hash calculation with MurmurHash, establishing an access data cache at the lower layer of the processor, and arranging the cached messages into a linear queue through Java's TreeMap.
S2, establishing a data-caching intermediate layer between the processor and mysql, and establishing a search engine, based on the intermediate layer, over the data in mysql.
S3, introducing a leaky bucket algorithm into the linear-queue data cache at the lower layer of the processor, realizing a smooth rate-limiting strategy for burst traffic.
S4, establishing a plurality of data backup servers at the lower layer of mysql, each loaded with an independent mysql instance.
S5, establishing a middle layer on the mysql of each data backup server and connecting these middle layers to the same processor.
S6, retrieving data across the middle layers of the data backup servers while keeping the cached data differentiated.
The differentiation of the cached data means that, on the basis of keeping the cached data non-duplicated, the cached data is processed in priority order so that cache occupancy is kept at a minimum of ninety percent and the cache is not cleared.
As a further scheme of the invention: the leaky bucket algorithm comprises the following steps:
a: and setting the maximum capacity of the linear queue data buffer according to the data buffer at the lower layer of the processor.
b: and setting the maximum data cache leakage speed of the lower layer of the processor according to the maximum parallel processing amount and the speed of the processor.
c: and acquiring the total data processing amount according to the product of the data caching processing speed and the time interval.
d: judging the data cache capacity allowance of the linear queue, if not, continuing to input data, if the capacity is full, refusing data input, and storing the data into a secondary linear queue data cache corresponding to the backup processor for synchronous processing.
The working process of the memcache comprises the following steps:
a: and checking whether the request data of the client is in the memcache.
b: if the request data is directly returned, no operation is performed on the database, if the requested data is not in the memcache, the database is accessed, the data is obtained from the database through the established search engine and returned to the client, and meanwhile, one copy of the data is cached in the memcache.
The middle tier comprises the distributed cache system memcache, developed by Brad Fitzpatrick of LiveJournal.
Example 3:
A distributed high-concurrency message matching method comprises the following steps:
S1, performing hash calculation with MurmurHash, establishing an access data cache at the lower layer of the processor, and arranging the cached messages into a linear queue through Java's TreeMap.
S2, establishing a data-caching intermediate layer between the processor and mysql, and establishing a search engine, based on the intermediate layer, over the data in mysql.
S3, introducing a leaky bucket algorithm into the linear-queue data cache at the lower layer of the processor, realizing a smooth rate-limiting strategy for burst traffic.
S4, introducing the messages rejected by the smooth rate-limiting strategy into a backup processor, realizing double-bucket shunt processing of the rate-limited data flow.
The backup processor is connected with the middle layer corresponding to the main processor, and a secondary linear-queue data cache is synchronously built at the lower layer of the backup processor, with a leaky bucket algorithm introduced.
As a further scheme of the invention: the leaky bucket algorithm comprises the following steps:
a: and setting the maximum capacity of the linear queue data buffer according to the data buffer at the lower layer of the processor.
b: and setting the maximum data cache leakage speed of the lower layer of the processor according to the maximum parallel processing amount and the speed of the processor.
c: and acquiring the total data processing amount according to the product of the data caching processing speed and the time interval.
d: judging the allowance of the linear queue data cache capacity, if not, continuing to input data, if the capacity is full, refusing data input, and storing the data into a secondary linear queue data cache corresponding to the backup processor for synchronous processing.
The working process of the memcache comprises the following steps:
a: and checking whether the request data of the client is in the memcache.
b: if the request data is directly returned, no operation is performed on the database, if the requested data is not in the memcache, the database is accessed, the data is obtained from the database through the established search engine and returned to the client, and meanwhile, one copy of the data is cached in the memcache.
The middle tier comprises the distributed cache system memcache, developed by Brad Fitzpatrick of LiveJournal.
In conclusion, the present invention arranges the cached messages in a linear queue at the lower layer of the processor, pairs this with the double-bucket strategy, and has the multiple cache nodes of the data backup library read and cache different data. Each of these measures can also be adopted independently; together, the overall matching effect and performance are outstanding, and the method has a targeted optimization effect on systematic handling and matching of high-concurrency messages.
It should finally be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. A distributed high-concurrency message matching method, characterized by comprising the following steps:
S1, performing hash calculation with MurmurHash, establishing an access data cache at the lower layer of the processor, and arranging the cached messages into a linear queue through Java's TreeMap;
S2, establishing a data-caching intermediate layer between the processor and mysql, and establishing a search engine, based on the intermediate layer, over the data in mysql;
S3, introducing a leaky bucket algorithm into the linear-queue data cache at the lower layer of the processor to realize a smooth rate-limiting strategy for burst traffic;
S4, establishing a plurality of data backup servers at the lower layer of mysql, each loaded with an independent mysql instance;
S5, establishing a middle layer on the mysql of each data backup server and connecting these middle layers to the same processor;
S6, retrieving data across the middle layers of the data backup servers while keeping the cached data differentiated;
S7, introducing the messages rejected by the smooth rate-limiting strategy into a backup processor, realizing double-bucket shunt processing of the rate-limited data flow.
2. The distributed high-concurrency message matching method according to claim 1, wherein: the backup processor is connected with a plurality of middle layers corresponding to the main processor, and a secondary linear-queue data cache is synchronously built at the lower layer of the backup processor, with a leaky bucket algorithm introduced.
3. The distributed high-concurrency message matching method according to claim 1, wherein: the differentiation of the cached data means that, on the basis of keeping the cached data non-duplicated, the cached data is processed in priority order so that cache occupancy is kept at a minimum of ninety percent and the cache is not cleared.
4. The distributed high-concurrency message matching method according to claim 1, wherein: the leaky bucket algorithm comprises the following steps:
a: setting the maximum capacity of the linear-queue data cache according to the data cache at the lower layer of the processor;
b: setting the maximum leak rate of the lower-layer data cache according to the maximum parallel processing volume and speed of the processor;
c: obtaining the total processed data volume as the product of the data-cache processing speed and the time interval;
d: checking the remaining capacity of the linear-queue data cache; if it is not full, data input continues; if it is full, data input is refused and the data is stored into the secondary linear-queue data cache of the backup processor for synchronous processing.
5. The distributed high-concurrency message matching method according to claim 1, wherein: the middle tier comprises the distributed cache system memcache, developed by Brad Fitzpatrick of LiveJournal.
6. The distributed high-concurrency message matching method according to claim 5, wherein: the working process of the memcache comprises the following steps:
a: checking whether the request data of the client is in the memcache;
b: if the requested data is in the memcache, it is returned directly and no operation is performed on the database; if it is not, the database is accessed, the data is obtained from the database through the established search engine and returned to the client, and one copy of the data is cached in the memcache at the same time.
CN202111270752.6A 2021-10-29 2021-10-29 Distributed high-concurrency message matching method Active CN113986961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111270752.6A CN113986961B (en) 2021-10-29 2021-10-29 Distributed high-concurrency message matching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111270752.6A CN113986961B (en) 2021-10-29 2021-10-29 Distributed high-concurrency message matching method

Publications (2)

Publication Number Publication Date
CN113986961A (en) 2022-01-28
CN113986961B (en) 2022-05-20

Family

ID=79744257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111270752.6A Active CN113986961B (en) 2021-10-29 2021-10-29 Distributed high-concurrency message matching method

Country Status (1)

Country Link
CN (1) CN113986961B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1770954A1 (en) * 2005-10-03 2007-04-04 Amadeus S.A.S. System and method to maintain coherence of cache contents in a multi-tier software system aimed at interfacing large databases
US10360159B1 (en) * 2013-12-12 2019-07-23 Groupon, Inc. System, method, apparatus, and computer program product for providing a cache mechanism
US9591101B2 (en) * 2014-06-27 2017-03-07 Amazon Technologies, Inc. Message batching in a distributed strict queue

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101453398A (en) * 2007-12-06 2009-06-10 怀特威盛软件公司 Novel distributed grid super computer system and method
CN103268321A (en) * 2013-04-19 2013-08-28 中国建设银行股份有限公司 Data processing method and device for high concurrency transaction
CN104657143A (en) * 2015-02-12 2015-05-27 中復保有限公司 High-performance data caching method
CN108183961A (en) * 2018-01-04 2018-06-19 中电福富信息科技有限公司 A kind of distributed caching method based on Redis
CN111314397A (en) * 2018-12-11 2020-06-19 北京奇虎科技有限公司 Message processing method and device based on Swoole framework and Yaf framework
CN111078426A (en) * 2019-12-03 2020-04-28 紫光云(南京)数字技术有限公司 High concurrency solution under back-end micro-service architecture
CN112699154A (en) * 2021-03-25 2021-04-23 上海洋漪信息技术有限公司 Multi-level caching method for large-flow data
CN113392132A (en) * 2021-05-07 2021-09-14 杭州数知梦科技有限公司 Distributed caching method and system for IOT scene
CN113553346A (en) * 2021-07-22 2021-10-26 中国电子科技集团公司第十五研究所 Large-scale real-time data stream integrated processing, forwarding and storing method and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"A distributed cache for hadoop distributed file system in real-time cloud services";J Zhang, 等;《2012 ACM/IEEE 13th International Conference on Grid Computing》;20121004;12-20 *
"Sundial: harmonizing concurrency control and caching in a distributed OLTP database management system";Xiangyao Yu 等;《https://doi.org/10.14778/3231751.3231763》;20180601;1-14 *
Memcached分布式缓存系统的应用;常广炎;《电脑编程技巧与维护》;20170403(第07期);24-25 *
基于i.lon的高性能数据采集方案研究;黄竞斌等;《计算机工程与设计》;20110616(第06期);全文 *

Also Published As

Publication number Publication date
CN113986961A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
CN106953901B (en) Cluster communication system and method for improving message transmission performance
CN108183961A (en) A kind of distributed caching method based on Redis
CN106170016A (en) A kind of method and system processing high concurrent data requests
CN104735110B (en) Metadata management method and system
CN107450855B (en) Model-variable data distribution method and system for distributed storage
CN113377868B (en) Offline storage system based on distributed KV database
CN102316097B (en) Streaming media scheduling and distribution method capable of reducing wait time of user
CN104050102B (en) Object storage method and device in a kind of telecommunication system
Zhang et al. Survey of research on big data storage
CN113655969A (en) Data balanced storage method based on streaming distributed storage system
CN105975345A (en) Video frame data dynamic equilibrium memory management method based on distributed memory
CN109165096A (en) The caching of web cluster utilizes system and method
CN113986961B (en) Distributed high-concurrency message matching method
CN105760391A (en) Data dynamic redistribution method and system, data node and name node
CN114048186A (en) Data migration method and system based on mass data
CN110784498B (en) Personalized data disaster tolerance method and device
CN110515938A (en) Data convergence storage method, equipment and storage medium based on KAFKA messaging bus
CN106549983A (en) The access method and terminal of a kind of database, server
CN115114294A (en) Self-adaption method and device of database storage mode and computer equipment
CN109032502A (en) A kind of distributed data cluster storage system
Furuya et al. Load balancing method for data management using high availability distributed clusters
He et al. Replicate distribution method of minimum cost in cloud storage for Internet of things
KR100952166B1 (en) Method and apparatus for data version management of grid database
CN108737156A Distributed file system based on multiple peer NameNodes and writing method
Chen et al. A faster read and less storage algorithm for small files on Hadoop

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant