CN109669960A - System and method for avoiding cache avalanche through multi-level caching in microservices - Google Patents

System and method for avoiding cache avalanche through multi-level caching in microservices

Info

Publication number
CN109669960A
CN109669960A (application CN201811595142.1A)
Authority
CN
China
Prior art keywords
data
microservices
query
cache
cloud platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811595142.1A
Other languages
Chinese (zh)
Inventor
Wang Zhi (王智)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taima Information Network Technology Co Ltd
Original Assignee
Taima Information Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taima Information Network Technology Co Ltd
Priority to CN201811595142.1A
Publication of CN109669960A
Legal status: Pending (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention provides a system and method for avoiding cache avalanche through multi-level caching in microservices. The system comprises a gateway and multiple microservice cloud platforms each equipped with a first-level cache, a Redis database serving as a second-level cache, and a MySQL database. On receiving a data query request, the gateway distributes it to a corresponding microservice cloud platform according to a load-balancing distribution principle. The corresponding microservice cloud platform checks whether the requested data exists in its local first-level cache; if so, it returns the data directly; if not, it queries the Redis database, and if the data exists in the second-level cache it retrieves the data and feeds it back to the gateway; if the data is not in the second-level cache, it queries the MySQL database and feeds the queried data back to the gateway. By using multiple cache levels with differentiated expiry times, the present invention avoids the cache avalanche problem caused by the data on multiple cache nodes expiring at the same time.

Description

System and method for avoiding cache avalanche through multi-level caching in microservices
Technical field
The present invention relates to the field of microservice caching technology, and in particular to a system and method for avoiding cache avalanche through multi-level caching in microservices.
Background technique
With the popularization of microservice technology, centralized caching technologies represented by Redis are being used ever more widely in cloud platforms. While improving system throughput, Redis also introduces some unavoidable disadvantages: every query request must fetch the cached data from Redis over the network, which increases bandwidth usage as well as network latency. In addition, once the data in the Redis cache reaches its expiry time, an instantaneous flood of requests hits the database directly; this situation can bring the database down and is referred to as a cache avalanche.
Summary of the invention
The object of the present invention is to solve the problems of high bandwidth consumption of centralized caching and of cache avalanche, and to provide a system and method for avoiding cache avalanche through multi-level caching in microservices.
The present invention solves the above technical problems through the following technical solutions:
The present invention provides a system for avoiding cache avalanche through multi-level caching in microservices, characterized in that it includes a gateway, multiple microservice cloud platforms each equipped with a first-level cache, a Redis database serving as a second-level cache, and a MySQL database;
The gateway is configured to distribute a data query request, upon receiving it, to a corresponding microservice cloud platform according to a load-balancing distribution principle;
The corresponding microservice cloud platform is configured to check whether the requested data exists in its local first-level cache; if so, it returns the data directly; if not, it queries the Redis database, and if the data exists in the second-level cache it retrieves the data and feeds it back to the gateway; if the data is not in the second-level cache, it queries the MySQL database, and if the data exists in the MySQL database it retrieves the data and feeds it back to the gateway; if the data does not exist in the MySQL database, it feeds back to the gateway a message that the data does not exist.
Preferably, the corresponding microservice cloud platform is configured to broadcast the query result to the other microservice cloud platforms regardless of whether the requested data was found; the other microservice cloud platforms cache the query result, which includes the data query request and whether data was found.
Preferably, the microservice cloud platforms all use ConcurrentHashMap to store first-level cache data; each entry in the first-level cache includes the cached data content, its creation time, its expiry time, and its hit count;
The microservice cloud platform is configured to judge, when the expiry time of an entry in its local first-level cache is reached, whether the hit count of that entry is below a set threshold; if so, the entry is deleted; if not, the entry is re-cached as a new entry in the local first-level cache.
Preferably, the expiry times of the same cached entry differ across different first-level caches, and/or the expiry time of the entry in the first-level cache differs from its expiry time in the second-level cache.
Preferably, the system further includes a Kafka component; when one of the microservice cloud platforms updates a piece of data, the Kafka component synchronizes the updated data to the other microservice cloud platforms.
The present invention also provides a method for avoiding cache avalanche through multi-level caching in microservices, characterized in that it includes the following steps:
S1. Upon receiving a data query request, the gateway distributes it to a corresponding microservice cloud platform according to a load-balancing distribution principle;
S2. The corresponding microservice cloud platform checks whether the requested data exists in its local first-level cache; if so, go to step S3; otherwise go to step S4;
S3. The corresponding microservice cloud platform returns the requested data directly;
S4. The corresponding microservice cloud platform queries the Redis database for the requested data; if found, go to step S5; otherwise go to step S6;
S5. The corresponding microservice cloud platform retrieves the requested data and feeds it back to the gateway;
S6. The corresponding microservice cloud platform queries the MySQL database for the requested data; if found, go to step S7; otherwise go to step S8;
S7. The corresponding microservice cloud platform retrieves the requested data and feeds it back to the gateway;
S8. The corresponding microservice cloud platform feeds back to the gateway a message that the data does not exist.
Preferably, after step S8, the corresponding microservice cloud platform broadcasts the query result to the other microservice cloud platforms regardless of whether the requested data was found; the other microservice cloud platforms cache the query result, which includes the data query request and whether data was found.
Preferably, the microservice cloud platforms all use ConcurrentHashMap to store first-level cache data; each entry in the first-level cache includes the cached data content, its creation time, its expiry time, and its hit count;
When the expiry time of an entry in its local first-level cache is reached, the microservice cloud platform judges whether the hit count of that entry is below a set threshold; if so, the entry is deleted; if not, the entry is re-cached as a new entry in the local first-level cache.
Preferably, the expiry times of the same cached entry differ across different first-level caches, and/or the expiry time of the entry in the first-level cache differs from its expiry time in the second-level cache.
Preferably, when one of the microservice cloud platforms updates a piece of data, a Kafka component synchronizes the updated data to the other microservice cloud platforms.
On the basis of common knowledge in the art, the above preferable conditions can be combined arbitrarily to obtain the preferred embodiments of the present invention.
The positive effects of the present invention are as follows:
The present invention uses the local memory of each microservice as a first-level cache (decentralized) and Redis as a second-level cache (centralized); the database is accessed for the requested data only when the data is absent from both the first-level and second-level caches.
The present invention uses multiple cache levels with differentiated expiry times to avoid the cache avalanche problem caused by the data on multiple cache nodes expiring at the same time. Thanks to the first-level cache, most requests are served from local memory without accessing the second-level Redis cache over the network, which greatly improves the system's response time and saves bandwidth resources.
Description of the drawings
Fig. 1 is a schematic diagram of the system for avoiding cache avalanche through multi-level caching in microservices according to a preferred embodiment of the present invention.
Fig. 2 is a flowchart of the method for avoiding cache avalanche through multi-level caching in microservices according to a preferred embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
As shown in Fig. 1, this embodiment provides a system for avoiding cache avalanche through multi-level caching in microservices, including a gateway, multiple microservice cloud platforms each equipped with a first-level cache, a Redis database serving as a second-level cache, and a MySQL database.
The gateway distributes a data query request, upon receiving it, to a corresponding microservice cloud platform according to a load-balancing distribution principle.
For example, as in Fig. 1, when the gateway receives a data query request, it distributes the request to the currently relatively idle microservice cloud platform A1 according to the load-balancing distribution principle.
The corresponding microservice cloud platform checks whether the requested data exists in its local first-level cache; if so, it returns the data directly; if not, it queries the Redis database, and if the data exists in the second-level cache it retrieves the data and feeds it back to the gateway; if the data is not in the second-level cache, it queries the MySQL database, and if the data exists there it retrieves the data and feeds it back to the gateway; if the data does not exist in the MySQL database, it feeds back to the gateway a message that the data does not exist.
For example: microservice cloud platform A1 checks whether the requested data exists in its local first-level cache; if so, it returns the data directly; if not, it queries the Redis database, and if the data exists in the second-level cache it retrieves the data and feeds it back to the gateway; if the data is not in the second-level cache, it queries the MySQL database and feeds the queried data back to the gateway.
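For illustration only, the following is a minimal Java sketch of this three-tier lookup; the class and method names, the host names, the Jedis client, and the SQL table are assumptions for the sketch, not taken from the patent text, and the broadcast to other nodes is omitted.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.concurrent.ConcurrentHashMap;
import redis.clients.jedis.Jedis;

// Minimal sketch of the three-tier lookup: local first-level cache, Redis second-level
// cache, then MySQL. Names and the "kv" table are illustrative assumptions.
public class MultiLevelCacheLookup {

    private final ConcurrentHashMap<String, String> l1 = new ConcurrentHashMap<>(); // first-level cache (local memory)
    private final Jedis redis = new Jedis("redis-host", 6379);                      // second-level cache (centralized)
    private final Connection mysql;                                                 // database

    public MultiLevelCacheLookup(Connection mysql) {
        this.mysql = mysql;
    }

    /** Returns the value for key, or null when it exists nowhere (fed back to the gateway as "no such data"). */
    public String lookup(String key) throws SQLException {
        // 1. local first-level cache
        String value = l1.get(key);
        if (value != null) {
            return value;
        }
        // 2. second-level Redis cache
        value = redis.get(key);
        if (value != null) {
            l1.put(key, value);          // promote into the first-level cache
            return value;
        }
        // 3. MySQL database
        try (PreparedStatement ps = mysql.prepareStatement("SELECT v FROM kv WHERE k = ?")) {
            ps.setString(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    value = rs.getString(1);
                    l1.put(key, value);  // refill both cache levels
                    redis.set(key, value);
                    return value;
                }
            }
        }
        return null;                     // data does not exist anywhere
    }
}
```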
In this embodiment, microservice cloud platform A1 broadcasts the query result to the other microservice cloud platforms A2 and A3 regardless of whether the requested data was found; A2 and A3 cache the query result, which includes the data query request and whether data was found.
Broadcasting the query result to the other microservice cloud platforms, whether or not the requested data was found in the local first-level cache, solves the cache penetration problem. Suppose a data query request carries a data id that does not exist; then the data is certain to be absent from the first-level cache, the second-level cache, and the database. Every such request would pass straight through the caches and hit the database, adding to its burden; this situation is referred to as cache penetration. In this embodiment, because the query result is broadcast to the other microservice cloud platforms regardless of whether the data was found, those platforms no longer need to query the second-level cache and the database when they receive the same data query request; they can directly return a message that the data does not exist.
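The following is a small illustrative sketch, under assumed names, of how a platform might cache a broadcast query result, including the "not found" case, so that a repeated query for a non-existent id can be answered locally without touching Redis or MySQL.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of negative-result caching driven by broadcasts from other platforms.
// QueryResult, onBroadcast() and lookupLocally() are illustrative assumptions.
public class QueryResultCache {

    /** The broadcast payload: the original request key, whether data was found, and the value if any. */
    public record QueryResult(String key, boolean found, String value) {}

    private final ConcurrentHashMap<String, QueryResult> results = new ConcurrentHashMap<>();

    /** Called when another microservice cloud platform broadcasts its query result. */
    public void onBroadcast(QueryResult result) {
        results.put(result.key(), result);
    }

    /** Returns the cached result, so a missing id can be answered locally with "no such data". */
    public QueryResult lookupLocally(String key) {
        return results.get(key);
    }
}
```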
To retrieve cached data quickly under high concurrency, the microservice cloud platforms all use ConcurrentHashMap to store first-level cache data; each entry in the first-level cache includes the cached data content, its creation time, its expiry time, and its hit count.
When the expiry time of an entry in its local first-level cache is reached, microservice cloud platform A1, A2, or A3 judges whether the hit count of that entry is below a set threshold; if so, the entry is deleted; if not, the entry is re-cached as a new entry in the local first-level cache.
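The following illustrative sketch, with assumed names and an assumed threshold value, shows a first-level cache entry carrying the four fields listed above and the decision taken when its expiry time is reached: low-hit entries are deleted, frequently hit entries are re-cached.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a ConcurrentHashMap-based first-level cache entry with content,
// creation time, expiry time and hit count, plus the expiry-time decision.
// HIT_THRESHOLD and the 5-minute re-cache TTL are assumptions.
public class FirstLevelCache {

    static final long HIT_THRESHOLD = 10;          // assumed threshold for a "hot" entry

    static class Entry {
        final String content;                      // cached data content
        final long createdAtMillis;                // creation time
        volatile long expiresAtMillis;             // expiry time
        final AtomicLong hits = new AtomicLong();  // hit count

        Entry(String content, long ttlMillis) {
            this.content = content;
            this.createdAtMillis = System.currentTimeMillis();
            this.expiresAtMillis = createdAtMillis + ttlMillis;
        }
    }

    private final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

    public String get(String key) {
        Entry e = cache.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expiresAtMillis) {
            onExpired(key, e);
            return null;
        }
        e.hits.incrementAndGet();
        return e.content;
    }

    /** When the expiry time is reached: cold entries are removed, hot ones are re-cached as new entries. */
    private void onExpired(String key, Entry e) {
        if (e.hits.get() < HIT_THRESHOLD) {
            cache.remove(key, e);
        } else {
            cache.put(key, new Entry(e.content, 5 * 60_000L)); // re-cache with a fresh TTL
        }
    }
}
```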
In this embodiment, to solve the cache avalanche problem, the expiry times of the same cached entry are set differently in different first-level caches, and the entry's expiry time in the first-level cache differs from its expiry time in the second-level cache. Configuring different expiry times for the first-level and second-level caches avoids, to the greatest extent, the situation where multiple cache levels expire at the same time; for example, the first-level cache expires after 5 minutes and the second-level cache after 6 minutes. In addition, the expiry times also differ between different first-level caches.
After each cache instance starts, a random expiry offset can be added using the Random class, so the data expiry times of different first-level caches differ slightly. Even if the data on some first-level cache node expires, the load-balancing characteristic of microservices spreads the large volume of requests across the microservice cloud platforms, so only a small fraction of requests pass through the first-level cache to the second-level cache. Only if the second-level cache data also happens to have expired at that moment will the MySQL database be requested, and the access volume is then so small that it does not place an excessive burden on the MySQL database. After the MySQL database has been queried, the first-level cache node whose data had expired updates its memory with the queried data as a new entry and broadcasts it to the other first-level cache nodes; when the other first-level cache nodes update the data, they also update the expiry time. Therefore the expiry of a hot data item usually occurs on only a small number of first-level cache nodes, only a small number of requests reach the second-level cache or the MySQL database, and a cache avalanche does not occur.
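The following sketch illustrates the differentiated expiry times; the 5-minute and 6-minute base values follow the example above, while the size of the random offset and the class name are assumptions.

```java
import java.util.Random;

// Sketch of the differentiated expiry policy: each cache instance draws a random
// offset ("gain time") once at startup so first-level caches on different nodes
// never expire in unison, and the second-level TTL is longer than the first-level TTL.
public class ExpiryPolicy {

    private static final long L1_BASE_TTL_MILLIS = 5 * 60_000L; // first-level cache: 5 minutes
    private static final long L2_BASE_TTL_MILLIS = 6 * 60_000L; // second-level cache: 6 minutes

    // Chosen once per cache instance at startup, so every node's first-level TTL differs slightly.
    private final long instanceOffsetMillis = new Random().nextInt(30_000);

    public long firstLevelTtl()  { return L1_BASE_TTL_MILLIS + instanceOffsetMillis; }
    public long secondLevelTtl() { return L2_BASE_TTL_MILLIS; }
}
```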
To solve the problem of data synchronization between caches: the same microservice usually has multiple microservice cloud platform instances, for example A1, A2, and A3. When one of these microservice cloud platforms updates a piece of data, the change needs to be synchronized to the other microservice instances A2 and A3. The present invention uses a message middleware such as Kafka to send and receive the updated data: the party that changed the data acts as the sender, and the other related microservices act as receivers. Specifically, the system further includes a Kafka component; when one of the microservice cloud platforms updates a piece of data, the Kafka component synchronizes the updated data to the other microservice cloud platforms.
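The following sketch illustrates this Kafka-based synchronization, with the updating instance as sender and the other instances as receivers; the topic name, group id, and broker address are assumptions. Giving each instance its own consumer group means every instance receives every update.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sketch of cache synchronization over Kafka: the updating instance publishes the
// changed key/value, the other instances consume it and refresh their first-level caches.
public class CacheSyncOverKafka {

    private static final String TOPIC = "cache-updates"; // assumed topic name

    /** Sender side: the microservice instance that changed the data. */
    public static void publishUpdate(String key, String newValue) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "kafka-host:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>(TOPIC, key, newValue));
        }
    }

    /** Receiver side: the other microservice instances apply the update to their local first-level cache. */
    public static void consumeUpdates(FirstLevelCacheUpdater localCache) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "kafka-host:9092");
        p.put("group.id", "microservice-instance-A2"); // each instance uses its own group so all receive the update
        p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
            consumer.subscribe(Collections.singletonList(TOPIC));
            while (true) {
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
                    localCache.put(r.key(), r.value()); // refresh the local first-level cache
                }
            }
        }
    }

    /** Minimal abstraction of the local first-level cache used above. */
    public interface FirstLevelCacheUpdater {
        void put(String key, String value);
    }
}
```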
On the data consistency of the first-level cache, the second-level cache, and the database: in the present invention, the data in the second-level Redis cache and the data in the database are guaranteed by a local transaction to be either both updated successfully or both left unchanged, so the data held by these two is strongly consistent. The data in the first-level caches is updated through Kafka broadcasts and therefore lags slightly; it is eventually consistent.
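The following sketch illustrates the "both updated or neither updated" idea for the database and the second-level cache within one local transaction scope; the table layout, naming, and error handling are assumptions, and the edge case of a failure after the Redis write is not addressed here.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import redis.clients.jedis.Jedis;

// Sketch of a paired MySQL + Redis update: the database change is committed only
// after the second-level cache write succeeds, otherwise it is rolled back.
public class SecondLevelAndDatabaseUpdater {

    private final Connection mysql;
    private final Jedis redis;

    public SecondLevelAndDatabaseUpdater(Connection mysql, Jedis redis) {
        this.mysql = mysql;
        this.redis = redis;
    }

    public void update(String key, String newValue) throws SQLException {
        boolean oldAutoCommit = mysql.getAutoCommit();
        mysql.setAutoCommit(false);
        try (PreparedStatement ps = mysql.prepareStatement("UPDATE kv SET v = ? WHERE k = ?")) {
            ps.setString(1, newValue);
            ps.setString(2, key);
            ps.executeUpdate();
            redis.set(key, newValue);   // second-level cache write inside the same scope
            mysql.commit();             // both succeeded: make the change durable
        } catch (Exception e) {
            mysql.rollback();           // either write failed: the database update does not take effect
            throw new SQLException("update of " + key + " was rolled back", e);
        } finally {
            mysql.setAutoCommit(oldAutoCommit);
        }
    }
}
```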
As shown in Fig. 2, this embodiment also provides a method for avoiding cache avalanche through multi-level caching in microservices, including the following steps:
Step 101: upon receiving a data query request, the gateway distributes it to a corresponding microservice cloud platform according to a load-balancing distribution principle;
Step 102: the corresponding microservice cloud platform checks whether the requested data exists in its local first-level cache; if so, go to step 103; otherwise go to step 104;
Step 103: the corresponding microservice cloud platform returns the requested data directly;
Step 104: the corresponding microservice cloud platform queries the Redis database for the requested data; if found, go to step 105; otherwise go to step 106;
Step 105: the corresponding microservice cloud platform retrieves the requested data and feeds it back to the gateway;
Step 106: the corresponding microservice cloud platform queries the MySQL database for the requested data; if found, go to step 107; otherwise go to step 108;
Step 107: the corresponding microservice cloud platform retrieves the requested data and feeds it back to the gateway;
Step 108: the corresponding microservice cloud platform feeds back to the gateway a message that the data does not exist.
After step 108, the corresponding microservice cloud platform broadcasts the query result to the other microservice cloud platforms regardless of whether the requested data was found; the other microservice cloud platforms cache the query result, which includes the data query request and whether data was found.
Although specific embodiments of the present invention have been described above, those skilled in the art will appreciate that these are merely illustrative and that the protection scope of the present invention is defined by the appended claims. Those skilled in the art may make various changes and modifications to these embodiments without departing from the principle and substance of the present invention, but all such changes and modifications fall within the protection scope of the present invention.

Claims (10)

1. A system for avoiding cache avalanche through multi-level caching in microservices, characterized in that it includes a gateway, multiple microservice cloud platforms each equipped with a first-level cache, a Redis database serving as a second-level cache, and a MySQL database;
The gateway is configured to distribute a data query request, upon receiving it, to a corresponding microservice cloud platform according to a load-balancing distribution principle;
The corresponding microservice cloud platform is configured to check whether the requested data exists in its local first-level cache and, if so, return the data directly; if not, to query the Redis database and, if the data exists in the second-level cache, retrieve it and feed it back to the gateway; if the data is not in the second-level cache, to query the MySQL database and, if the data exists there, retrieve it and feed it back to the gateway; and, if the data does not exist in the MySQL database, to feed back to the gateway a message that the data does not exist.
2. The system for avoiding cache avalanche through multi-level caching in microservices according to claim 1, characterized in that the corresponding microservice cloud platform is configured to broadcast the query result to the other microservice cloud platforms regardless of whether the requested data was found, and the other microservice cloud platforms are configured to cache the query result, the query result including the data query request and whether data was found.
3. The system for avoiding cache avalanche through multi-level caching in microservices according to claim 1, characterized in that the microservice cloud platforms all use ConcurrentHashMap to store first-level cache data, each entry in the first-level cache including the cached data content, its creation time, its expiry time, and its hit count;
The microservice cloud platform is configured to judge, when the expiry time of an entry in its local first-level cache is reached, whether the hit count of that entry is below a set threshold and, if so, to delete the entry; if not, to re-cache the entry as a new entry in the local first-level cache.
4. The system for avoiding cache avalanche through multi-level caching in microservices according to claim 3, characterized in that the expiry times of the same cached entry differ across different first-level caches, and/or the expiry time of the entry in the first-level cache differs from its expiry time in the second-level cache.
5. The system for avoiding cache avalanche through multi-level caching in microservices according to claim 1, characterized in that the system further includes a Kafka component configured to synchronize, when one of the microservice cloud platforms updates a piece of data, the updated data to the other microservice cloud platforms.
6. A method for avoiding cache avalanche through multi-level caching in microservices, characterized in that it includes the following steps:
S1. Upon receiving a data query request, the gateway distributes it to a corresponding microservice cloud platform according to a load-balancing distribution principle;
S2. The corresponding microservice cloud platform checks whether the requested data exists in its local first-level cache; if so, go to step S3; otherwise go to step S4;
S3. The corresponding microservice cloud platform returns the requested data directly;
S4. The corresponding microservice cloud platform queries the Redis database for the requested data; if found, go to step S5; otherwise go to step S6;
S5. The corresponding microservice cloud platform retrieves the requested data and feeds it back to the gateway;
S6. The corresponding microservice cloud platform queries the MySQL database for the requested data; if found, go to step S7; otherwise go to step S8;
S7. The corresponding microservice cloud platform retrieves the requested data and feeds it back to the gateway;
S8. The corresponding microservice cloud platform feeds back to the gateway a message that the data does not exist.
7. The method for avoiding cache avalanche through multi-level caching in microservices according to claim 6, characterized in that after step S8 the corresponding microservice cloud platform broadcasts the query result to the other microservice cloud platforms regardless of whether the requested data was found, the other microservice cloud platforms cache the query result, and the query result includes the data query request and whether data was found.
8. The method for avoiding cache avalanche through multi-level caching in microservices according to claim 6, characterized in that the microservice cloud platforms all use ConcurrentHashMap to store first-level cache data, each entry in the first-level cache including the cached data content, its creation time, its expiry time, and its hit count;
When the expiry time of an entry in its local first-level cache is reached, the microservice cloud platform judges whether the hit count of that entry is below a set threshold; if so, the entry is deleted; if not, the entry is re-cached as a new entry in the local first-level cache.
9. The method for avoiding cache avalanche through multi-level caching in microservices according to claim 8, characterized in that the expiry times of the same cached entry differ across different first-level caches, and/or the expiry time of the entry in the first-level cache differs from its expiry time in the second-level cache.
10. The method for avoiding cache avalanche through multi-level caching in microservices according to claim 6, characterized in that when one of the microservice cloud platforms updates a piece of data, a Kafka component synchronizes the updated data to the other microservice cloud platforms.
CN201811595142.1A 2018-12-25 2018-12-25 System and method for avoiding cache avalanche through multi-level caching in microservices Pending CN109669960A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811595142.1A CN109669960A (en) 2018-12-25 2018-12-25 System and method for avoiding cache avalanche through multi-level caching in microservices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811595142.1A CN109669960A (en) 2018-12-25 2018-12-25 System and method for avoiding cache avalanche through multi-level caching in microservices

Publications (1)

Publication Number Publication Date
CN109669960A (en) 2019-04-23

Family

ID=66147200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811595142.1A Pending CN109669960A (en) 2018-12-25 2018-12-25 System and method for avoiding cache avalanche through multi-level caching in microservices

Country Status (1)

Country Link
CN (1) CN109669960A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263010A (en) * 2019-05-31 2019-09-20 广东睿江云计算股份有限公司 A kind of cache file automatic update method and device
CN110413689A (en) * 2019-06-29 2019-11-05 苏州浪潮智能科技有限公司 A kind of the multinode method of data synchronization and device of memory database
CN110636341A (en) * 2019-10-25 2019-12-31 四川虹魔方网络科技有限公司 Large-concurrency supporting multi-level fine-grained caching mechanism launcher interface optimization method
CN110753099A (en) * 2019-10-12 2020-02-04 平安健康保险股份有限公司 Distributed cache system and cache data updating method
CN110795457A (en) * 2019-09-24 2020-02-14 苏宁云计算有限公司 Data caching processing method and device, computer equipment and storage medium
CN110795484A (en) * 2019-10-25 2020-02-14 北京浪潮数据技术有限公司 Micro service equipment and data translation method, device and storage medium thereof
CN110837521A (en) * 2019-11-15 2020-02-25 北京金山云网络技术有限公司 Data query method and device and server
CN111026985A (en) * 2019-12-02 2020-04-17 北京齐尔布莱特科技有限公司 Short link generation method, device and server
CN111258928A (en) * 2020-01-13 2020-06-09 大汉软件股份有限公司 High-performance two-stage cache device for scale website application
CN111737298A (en) * 2020-06-19 2020-10-02 中国工商银行股份有限公司 Cache data control method and device based on distributed storage
CN111858669A (en) * 2020-07-03 2020-10-30 上海众言网络科技有限公司 Method and device for second-level caching of data
CN112559560A (en) * 2019-09-10 2021-03-26 北京京东振世信息技术有限公司 Metadata reading method and device, metadata updating method and device, and storage device
CN112699154A (en) * 2021-03-25 2021-04-23 上海洋漪信息技术有限公司 Multi-level caching method for large-flow data
CN112818019A (en) * 2021-01-29 2021-05-18 北京思特奇信息技术股份有限公司 Query request filtering method applied to Redis client and Redis client
CN113420052A (en) * 2021-07-08 2021-09-21 上海浦东发展银行股份有限公司 Multi-level distributed cache system and method
CN113726662A (en) * 2021-08-19 2021-11-30 成都民航西南凯亚有限责任公司 Micro-service routing and management system plug-in
CN115134134A (en) * 2022-06-23 2022-09-30 中国民航信息网络股份有限公司 Information processing method, device and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106506605A (en) * 2016-10-14 2017-03-15 华南理工大学 A kind of SaaS application construction methods based on micro services framework
CN106815287A (en) * 2016-12-06 2017-06-09 中国银联股份有限公司 A kind of buffer memory management method and device
CN107231395A (en) * 2016-03-25 2017-10-03 阿里巴巴集团控股有限公司 Date storage method, device and system
CN107231305A (en) * 2017-05-05 2017-10-03 广东网金控股股份有限公司 A kind of route agent and buffer memory management method and device
CN108696579A (en) * 2018-04-28 2018-10-23 北京奇艺世纪科技有限公司 A kind of request responding method, device and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107231395A (en) * 2016-03-25 2017-10-03 阿里巴巴集团控股有限公司 Date storage method, device and system
CN106506605A (en) * 2016-10-14 2017-03-15 华南理工大学 A kind of SaaS application construction methods based on micro services framework
CN106815287A (en) * 2016-12-06 2017-06-09 中国银联股份有限公司 A kind of buffer memory management method and device
CN107231305A (en) * 2017-05-05 2017-10-03 广东网金控股股份有限公司 A kind of route agent and buffer memory management method and device
CN108696579A (en) * 2018-04-28 2018-10-23 北京奇艺世纪科技有限公司 A kind of request responding method, device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yiwang Jiaoluo (遗忘角落): "How to handle cache invalidation, cache penetration, cache concurrency, and similar problems", https://www.cnblogs.com/lingshao/p/5658757.html *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263010A (en) * 2019-05-31 2019-09-20 广东睿江云计算股份有限公司 A kind of cache file automatic update method and device
CN110263010B (en) * 2019-05-31 2023-05-02 广东睿江云计算股份有限公司 Automatic updating method and device for cache file
CN110413689A (en) * 2019-06-29 2019-11-05 苏州浪潮智能科技有限公司 A kind of the multinode method of data synchronization and device of memory database
CN110413689B (en) * 2019-06-29 2022-04-26 苏州浪潮智能科技有限公司 Multi-node data synchronization method and device for memory database
CN112559560A (en) * 2019-09-10 2021-03-26 北京京东振世信息技术有限公司 Metadata reading method and device, metadata updating method and device, and storage device
CN110795457A (en) * 2019-09-24 2020-02-14 苏宁云计算有限公司 Data caching processing method and device, computer equipment and storage medium
CN110753099A (en) * 2019-10-12 2020-02-04 平安健康保险股份有限公司 Distributed cache system and cache data updating method
CN110753099B (en) * 2019-10-12 2023-09-29 平安健康保险股份有限公司 Distributed cache system and cache data updating method
CN110636341B (en) * 2019-10-25 2021-11-09 四川虹魔方网络科技有限公司 Large-concurrency supporting multi-level fine-grained caching mechanism launcher interface optimization method
CN110795484A (en) * 2019-10-25 2020-02-14 北京浪潮数据技术有限公司 Micro service equipment and data translation method, device and storage medium thereof
CN110636341A (en) * 2019-10-25 2019-12-31 四川虹魔方网络科技有限公司 Large-concurrency supporting multi-level fine-grained caching mechanism launcher interface optimization method
CN110837521A (en) * 2019-11-15 2020-02-25 北京金山云网络技术有限公司 Data query method and device and server
CN111026985A (en) * 2019-12-02 2020-04-17 北京齐尔布莱特科技有限公司 Short link generation method, device and server
CN111258928A (en) * 2020-01-13 2020-06-09 大汉软件股份有限公司 High-performance two-stage cache device for scale website application
CN111737298A (en) * 2020-06-19 2020-10-02 中国工商银行股份有限公司 Cache data control method and device based on distributed storage
CN111737298B (en) * 2020-06-19 2024-04-26 中国工商银行股份有限公司 Cache data management and control method and device based on distributed storage
CN111858669A (en) * 2020-07-03 2020-10-30 上海众言网络科技有限公司 Method and device for second-level caching of data
CN112818019A (en) * 2021-01-29 2021-05-18 北京思特奇信息技术股份有限公司 Query request filtering method applied to Redis client and Redis client
CN112818019B (en) * 2021-01-29 2024-02-02 北京思特奇信息技术股份有限公司 Query request filtering method applied to Redis client and Redis client
CN112699154A (en) * 2021-03-25 2021-04-23 上海洋漪信息技术有限公司 Multi-level caching method for large-flow data
CN113420052A (en) * 2021-07-08 2021-09-21 上海浦东发展银行股份有限公司 Multi-level distributed cache system and method
CN113726662B (en) * 2021-08-19 2023-02-10 成都民航西南凯亚有限责任公司 Micro-service routing and management system
CN113726662A (en) * 2021-08-19 2021-11-30 成都民航西南凯亚有限责任公司 Micro-service routing and management system plug-in
CN115134134A (en) * 2022-06-23 2022-09-30 中国民航信息网络股份有限公司 Information processing method, device and equipment

Similar Documents

Publication Publication Date Title
CN109669960A (en) System and method for avoiding cache avalanche through multi-level caching in microservices
US7047301B2 (en) Method and system for enabling persistent access to virtual servers by an LDNS server
CN103347068B (en) A kind of based on Agent cluster network-caching accelerated method
US7209941B2 (en) System and method for distributing contents from a child server based on a client's current location
CN101257396B (en) System for distributing multi-field content based on P2P technique as well as corresponding method
CN103812849B (en) A kind of local cache update method, system, client and server
CN102289508B (en) Distributed cache array and data inquiry method thereof
CN105095313B (en) A kind of data access method and equipment
CN104811493B (en) The virtual machine image storage system and read-write requests processing method of a kind of network aware
CN102523256A (en) Content management method, device and system
CN106031130A (en) Content delivery network architecture with edge proxy
CN103596066B (en) Method and device for data processing
CN109151009B (en) CDN node distribution method and system based on MEC
CN107835437B (en) Dispatching method based on more cache servers and device
CN103179148B (en) A kind of processing method sharing adnexa in the Internet and system
CN103560959B (en) Method and device for selecting static route
CN103095727B (en) P2p resource location method
CN108153825A (en) Data access method and device
CN104378452A (en) Method, device and system for domain name resolution
CN113542058B (en) Data source returning method, server and storage medium
CN103338242A (en) Hybrid cloud storage system and method based on multi-level cache
CN104468853A (en) Domain name resolution method, server and system
CN103905538A (en) Neighbor cooperation cache replacement method in content center network
US11777852B1 (en) System and method for web service atomic transaction (WS-AT) affinity routing
CN109618003A (en) A kind of servers' layout method, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190423