CN115174502A - Flow control method, device, equipment and medium of API gateway - Google Patents
Flow control method, device, equipment and medium of API gateway
- Publication number
- CN115174502A (Application CN202210768112.6A)
- Authority
- CN
- China
- Prior art keywords
- api
- level cache
- flow control
- calling
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Images
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9021—Plurality of buffers per packet
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention discloses a flow control method for an API gateway, relating to the technical field of gateways, and solves the technical problem that excessive API call traffic affects the stability of the API gateway. The method comprises the following steps: setting up two levels of cache; setting the flow control count and the server-side information in the calling information of the API; issuing the data to the first-level cache; and, when the API is called, first looking up the key of the calling information in the second-level cache and, if the information for the current call cannot be found there, looking it up in the first-level cache and caching that data into the second-level cache. The invention also discloses a flow control apparatus, device and medium for the API gateway. Through a multi-level cache and a message queue, the invention performs data cache lookup and flow control, reduces the cost of server-side data storage, improves the stability of the API gateway, and supports higher concurrent access.
Description
Technical Field
The present invention relates to the field of gateway technologies, and in particular to a flow control method, apparatus, device and medium for an API gateway.
Background
At present, most API gateway projects adopt a strategy to keep the application running stably when data traffic is large. For example, an in-memory cache such as Redis is used: data is read out and placed in memory, so that when the data is needed it can be returned directly from memory, which greatly improves speed.
However, because Redis is usually deployed separately as a cluster, there is overhead on network IO. Although connection-pool tooling already exists for linking to a Redis cluster, data transmission still carries a certain cost. Such storage technologies are all centralized caching technologies: under high-traffic access they require high-frequency network access, which introduces bandwidth bottlenecks and network latency and affects the stability of the API gateway.
Disclosure of Invention
The technical problem to be solved by the present invention is to overcome the above deficiencies of the prior art. An object of the present invention is to provide a flow control method, apparatus, device and medium for an API gateway that solve the problems of bandwidth bottlenecks and network latency affecting the stability of the API gateway.
The invention provides a flow control method of an API gateway, which comprises the following steps:
setting up two levels of cache;
setting the flow control count and the server-side information in the calling information of the API;
issuing the data to the first-level cache;
and, when the API is called, first looking up the key of the calling information in the second-level cache; if the information for the current call cannot be found there, looking up the key in the first-level cache and caching that data into the second-level cache.
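As a concrete illustration of the read path above, the following is a minimal sketch in Java. The Redis first-level cache and the Caffeine second-level cache are stood in for by plain `ConcurrentHashMap`s (a real deployment would use a Redis client such as Jedis or Lettuce, and Caffeine for the second level); the `TwoLevelCache` class and its method names are illustrative assumptions, not part of the patent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the two-level lookup: try the in-process
// second-level cache first, fall back to the (remote) first-level
// cache, and backfill the second level on a miss.
public class TwoLevelCache {
    // Stand-in for the remote Redis first-level cache.
    private final Map<String, String> firstLevel = new ConcurrentHashMap<>();
    // Stand-in for the in-process Caffeine second-level cache.
    private final Map<String, String> secondLevel = new ConcurrentHashMap<>();

    public void publishToFirstLevel(String key, String callInfo) {
        firstLevel.put(key, callInfo);
    }

    public String lookup(String key) {
        // 1. Try the second-level (local) cache first.
        String value = secondLevel.get(key);
        if (value != null) {
            return value;
        }
        // 2. Miss: fall back to the first-level cache.
        value = firstLevel.get(key);
        if (value != null) {
            // 3. Backfill the second level so the next call is local.
            secondLevel.put(key, value);
        }
        return value;
    }

    public boolean isInSecondLevel(String key) {
        return secondLevel.containsKey(key);
    }
}
```

A hit in the second level avoids any network round trip, which is the point of the two-level arrangement.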
As a further improvement, the first-level cache is a Redis cache and the second-level cache is a Caffeine cache.
Further, key values are evicted from the Caffeine cache based on a time policy.
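Time-based eviction of this kind can be sketched with the standard library alone; with Caffeine itself it reduces to configuring `expireAfterWrite` on the builder. The `TtlCache` class below, its millisecond TTL, and the injectable clock are illustrative assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongSupplier;

// Illustrative time-based (expire-after-write) eviction: each entry
// remembers when it was written and is dropped once older than the TTL.
public class TtlCache {
    private static final class Entry {
        final String value;
        final long writtenAt;
        Entry(String value, long writtenAt) { this.value = value; this.writtenAt = writtenAt; }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final LongSupplier clock; // injectable so tests can control time

    public TtlCache(long ttlMillis, LongSupplier clock) {
        this.ttlMillis = ttlMillis;
        this.clock = clock;
    }

    public void put(String key, String value) {
        store.put(key, new Entry(value, clock.getAsLong()));
    }

    public String get(String key) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (clock.getAsLong() - e.writtenAt > ttlMillis) {
            store.remove(key, e); // expired: evict lazily on read
            return null;
        }
        return e.value;
    }
}
```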
Further, the information of the server includes a server Ip, a server Port, a server TargetBasePath, and a server Resource.
Further, the API gateway reports call log information to the API data monitoring center through the message queue. The API data monitoring center monitors the success rate and the number of calls; if the number of calls exceeds the quota agreed for the current API, it issues a call-over-limit message for the current API through the message queue.
Furthermore, the API gateway, acting as a consumer, consumes the over-limit message for the current API, modifies the flow control strategy in the second-level cache, starts flow control, and limits access to the API.
Further, the message queue is a Kafka message queue.
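The over-limit check described above amounts to counting calls per API and comparing the count against an agreed quota. A minimal sketch, assuming a per-API quota map (the `CallQuotaMonitor` class name and its methods are hypothetical, not from the patent):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative per-API call counter: record each call and report
// whether the agreed quota for that API has been exceeded, i.e.
// whether an over-limit message should be published.
public class CallQuotaMonitor {
    private final Map<String, AtomicLong> counts = new ConcurrentHashMap<>();
    private final Map<String, Long> quotas = new ConcurrentHashMap<>();

    public void setQuota(String api, long maxCalls) {
        quotas.put(api, maxCalls);
    }

    // Returns true when this call pushes the API over its quota.
    public boolean recordCall(String api) {
        long n = counts.computeIfAbsent(api, k -> new AtomicLong()).incrementAndGet();
        Long quota = quotas.get(api);
        return quota != null && n > quota;
    }
}
```

In the patent's design this check runs in the monitoring center, which then publishes the over-limit message through the queue rather than rejecting the call itself.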
The invention provides a flow control apparatus for an API gateway, comprising:
a setting module, used for setting the flow control count and the server-side information in the calling information of the API;
a data issuing module, used for issuing data to the first-level cache;
a first-level cache and a second-level cache, used for looking up the key of the calling information in the second-level cache during an API call; if the information for the current call cannot be found there, the key is looked up in the first-level cache and that data is cached into the second-level cache;
a message queue, used by the API gateway to report call log information to the API data monitoring center; the API data monitoring center monitors the success rate and the number of calls, and if the number of calls exceeds the quota agreed for the current API, issues a call-over-limit message for the current API through the message queue; the API gateway, acting as a consumer, consumes the over-limit message, modifies the flow control strategy in the second-level cache, starts flow control, and limits access to the API.
The invention provides an electronic device comprising a processor and a memory: the memory is used for storing program code and transmitting the program code to the processor; the processor is used for executing the flow control method of the API gateway according to the instructions in the program code.
The invention provides a computer-readable storage medium for storing program code for executing the flow control method of the API gateway described above.
Advantageous effects
Compared with the prior art, the invention has the advantages that:
the invention adopts the cafeine as the secondary cache, can directly read data from the memory, and frequently reads and writes the flow control data, has high speed and high efficiency compared with IO operation, reduces the dependence on redis, reduces the cost of data storage of the server, and avoids influencing the API gateway when the irresistance occurs at the redis server side. The API gateway and the API monitoring center carry out asynchronous interaction through a kafka message queue, statistics of calling times and success rate statistics are reported through logs, and once abnormal flow exceeds a limit or the success rate is too low, the API gateway is informed to start flow control through the message queue.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of the deployment of two-level caches in the present invention;
fig. 3 is a flowchart of reporting call log information in the present invention.
Detailed Description
The invention will be further described with reference to specific embodiments shown in the drawings.
Referring to figs. 1 to 3, a flow control method for an API gateway comprises:
setting up two levels of cache;
setting the flow control count and the server-side information in the calling information of the API;
issuing the data to the first-level cache;
and, when the API is called, first looking up the key of the calling information in the second-level cache; if the information for the current call cannot be found there, looking up the key in the first-level cache and caching that data into the second-level cache.
In this embodiment, the first-level cache is a Redis cache and the second-level cache is a Caffeine cache, i.e. Redis + Caffeine is adopted to cache data. Redis is an excellent cache database with the following four advantages: 1. extremely high performance — Redis can read about 110,000 times/s and write about 81,000 times/s; 2. rich data types — Redis supports binary-safe Strings, Lists, Hashes, Sets and Sorted Sets; 3. atomicity — all Redis operations are atomic, meaning they either execute successfully or fail completely; a single operation is atomic, and multiple operations can also be executed as a transaction wrapped by the MULTI and EXEC instructions; 4. rich features — Redis also supports publish/subscribe, notifications, key expiration, and so on.
Generally, however, Redis is deployed separately as a cluster, which incurs network IO. Although connection-pool tooling already exists for connecting to a Redis cluster, data transmission still carries a certain cost; such storage technologies are all centralized caching technologies, and under high-traffic access they require high-frequency network access, which brings bandwidth bottlenecks and network latency. Therefore an in-application cache such as Caffeine is added: when the in-application cache holds data that satisfies the request, that data can be used directly without fetching from Redis over the network, forming a two-level cache.
Caffeine is a high-performance caching library for Java 8, based on the design of Google Guava. Caffeine provides an in-memory cache similar to, but not identical to, ConcurrentMap. The most basic difference is that a ConcurrentMap holds all elements added to it until they are explicitly removed, whereas a cache is typically configured to evict entries automatically to limit its memory footprint. Because it loads entries automatically, Caffeine provides cache eviction based on capacity, time, and references: the capacity-based mode adopts an LRU algorithm, and reference-based eviction makes good use of the garbage collection mechanism of the Java virtual machine.
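Capacity-based LRU eviction of the kind described can be sketched with the standard library's `LinkedHashMap` in access order (with Caffeine itself one would simply set `maximumSize` on the builder; the `LruCache` class below is an illustrative stand-in, not the patent's implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative capacity-bounded LRU cache: a LinkedHashMap in
// access order evicts the least-recently-used entry once the
// configured maximum size is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true -> LRU ordering
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
    }
}
```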
With this two-level cache implementation, the expiration time can be adjusted according to memory capacity, and for API gateway applications that frequently read the same data the processing efficiency can be greatly improved.
In this embodiment, key values are evicted from the Caffeine cache based on a time policy. The server-side information comprises the server Ip, server Port, server TargetBasePath, and server Resource.
The API gateway reports call log information to the API data monitoring center through the message queue. The API data monitoring center monitors the success rate and the number of calls; if the number of calls exceeds the quota agreed for the current API, it issues a call-over-limit message for the current API through the message queue. The API gateway, acting as a consumer, consumes the over-limit message, modifies the flow control strategy in the second-level cache, starts flow control, and limits access to the API.
Preferably, the message queue is a kafka message queue.
Kafka is adopted as the message middleware. Kafka is a distributed, partitioned, replicated, multi-subscriber, coordinated distributed log system (which can also be used as an MQ system), commonly used for web/nginx logs, access logs, message services, and the like. When the monitoring center of the API gateway detects that the call traffic of an API is too large, Kafka is used to send a call-over-limit message for that API to the API gateway application. Through this integration of multi-level caching and a message queue, higher concurrent access can be supported and the stability of the API gateway is improved.
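The asynchronous notification path can be sketched end to end with an in-memory queue standing in for the Kafka topic (a real deployment would use `KafkaProducer`/`KafkaConsumer` from the `kafka-clients` library; the class, the flag map standing in for the second-level cache, and the message format below are assumptions for illustration):

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative over-limit notification flow: the monitoring center
// publishes an over-limit message to a "topic"; the gateway consumes
// it and flips the flow control flag for that API in its local
// (second-level) cache, after which calls to that API are rejected.
public class FlowControlNotifier {
    // Stand-in for the Kafka topic carrying over-limit messages.
    private final Queue<String> overLimitTopic = new ConcurrentLinkedQueue<>();
    // Stand-in for flow control flags held in the second-level cache.
    private final Map<String, Boolean> flowControlFlags = new ConcurrentHashMap<>();

    // Monitoring-center side: publish that an API exceeded its quota.
    public void publishOverLimit(String api) {
        overLimitTopic.add(api);
    }

    // Gateway side: drain pending messages and enable flow control.
    public void consumePending() {
        String api;
        while ((api = overLimitTopic.poll()) != null) {
            flowControlFlags.put(api, Boolean.TRUE);
        }
    }

    // Request path: reject the call when flow control is active.
    public boolean isCallAllowed(String api) {
        return !flowControlFlags.getOrDefault(api, Boolean.FALSE);
    }
}
```

Because the queue decouples the two sides, the gateway's request path never blocks on the monitoring center — the design choice the patent attributes to using Kafka.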
A flow control apparatus for an API gateway comprises:
a setting module, used for setting the flow control count and the server-side information in the calling information of the API;
a data issuing module, used for issuing data to the first-level cache;
a first-level cache and a second-level cache, used for looking up the key of the calling information in the second-level cache during an API call; if the information for the current call cannot be found there, the key is looked up in the first-level cache and that data is cached into the second-level cache;
a message queue, used by the API gateway to report call log information to the API data monitoring center; the API data monitoring center monitors the success rate and the number of calls, and if the number of calls exceeds the quota agreed for the current API, issues a call-over-limit message for the current API through the message queue; the API gateway, acting as a consumer, consumes the over-limit message, modifies the flow control strategy in the second-level cache, starts flow control, and limits access to the API.
An electronic device comprises a processor and a memory: the memory is used for storing program code and transmitting it to the processor; the processor is used for executing the flow control method of the API gateway according to the instructions in the program code.
A computer-readable storage medium is used for storing program code for executing the flow control method of the API gateway described above.
The invention caches data in the two-level cache and performs eviction based on a time policy, thereby protecting the application, preventing failures (force majeure) of the middleware from affecting the application, and forming the application's flow control mechanism.
The above is only a preferred embodiment of the present invention. It should be noted that it is obvious to those skilled in the art that several variations and modifications can be made without departing from the structure of the present invention, and these will not affect the effect of the implementation of the invention or the utility of the patent.
Claims (10)
1. A flow control method of an API gateway is characterized by comprising the following steps:
setting up two levels of cache;
setting the flow control count and the server-side information in the calling information of the API;
issuing the data to the first-level cache;
and, when the API is called, first looking up the key of the calling information in the second-level cache; if the information for the current call cannot be found there, looking up the key in the first-level cache and caching that data into the second-level cache.
2. The flow control method of the API gateway according to claim 1, wherein the first-level cache is a Redis cache and the second-level cache is a Caffeine cache.
3. The flow control method of the API gateway according to claim 1, wherein key values are evicted from the Caffeine cache based on a time policy.
4. The flow control method of the API gateway according to claim 1, wherein the server-side information comprises the server Ip, server Port, server TargetBasePath, and server Resource.
5. The flow control method of the API gateway according to any one of claims 1-4, wherein the API gateway reports call log information to the API data monitoring center through the message queue, the API data monitoring center monitors the success rate and the number of calls, and if the number of calls exceeds the quota agreed for the current API, a call-over-limit message for the current API is issued through the message queue.
6. The flow control method of the API gateway according to claim 5, wherein the API gateway, acting as a consumer, consumes the over-limit message for the current API, modifies the flow control strategy in the second-level cache, starts flow control, and limits access to the API.
7. The flow control method of the API gateway according to claim 5, wherein the message queue is a Kafka message queue.
8. A flow control apparatus of an API gateway, characterized by comprising:
a setting module, used for setting the flow control count and the server-side information in the calling information of the API;
a data issuing module, used for issuing data to the first-level cache;
a first-level cache and a second-level cache, used for looking up the key of the calling information in the second-level cache during an API call; if the information for the current call cannot be found there, the key is looked up in the first-level cache and that data is cached into the second-level cache;
a message queue, used by the API gateway to report call log information to the API data monitoring center; the API data monitoring center monitors the success rate and the number of calls, and if the number of calls exceeds the quota agreed for the current API, issues a call-over-limit message for the current API through the message queue; the API gateway, acting as a consumer, consumes the over-limit message, modifies the flow control strategy in the second-level cache, starts flow control, and limits access to the API.
9. An electronic device, comprising a processor and a memory: the memory is used for storing program code and transmitting it to the processor; the processor is configured to execute the flow control method of the API gateway according to any one of claims 1-7 according to the instructions in the program code.
10. A computer-readable storage medium for storing program code for executing the flow control method of the API gateway according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210768112.6A CN115174502A (en) | 2022-06-30 | 2022-06-30 | Flow control method, device, equipment and medium of API gateway |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115174502A true CN115174502A (en) | 2022-10-11 |
Family
ID=83489799
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210768112.6A Pending CN115174502A (en) | 2022-06-30 | 2022-06-30 | Flow control method, device, equipment and medium of API gateway |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115174502A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115618842A (en) * | 2022-12-15 | 2023-01-17 | 浙江蓝鸽科技有限公司 | Integrated intelligent campus data center system |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170344481A1 (en) * | 2016-05-31 | 2017-11-30 | Salesforce.Com, Inc. | Invalidation and refresh of multi-tier distributed caches |
US20180113790A1 (en) * | 2016-10-20 | 2018-04-26 | Cisco Technology, Inc. | Agentless distributed monitoring of microservices through a virtual switch |
CN108259269A (en) * | 2017-12-30 | 2018-07-06 | 上海陆家嘴国际金融资产交易市场股份有限公司 | The monitoring method and system of the network equipment |
CN109241767A (en) * | 2018-08-02 | 2019-01-18 | 浪潮软件集团有限公司 | Security control system and method for unstructured data resources |
CN109739727A (en) * | 2019-01-03 | 2019-05-10 | 优信拍(北京)信息科技有限公司 | Service monitoring method and device in micro services framework |
US20190146967A1 (en) * | 2017-11-15 | 2019-05-16 | Sumo Logic | Logs to metrics synthesis |
CN110069419A (en) * | 2018-09-04 | 2019-07-30 | 中国平安人寿保险股份有限公司 | Multilevel cache system and its access control method, equipment and storage medium |
CN111290865A (en) * | 2020-02-10 | 2020-06-16 | 腾讯科技(深圳)有限公司 | Service calling method and device, electronic equipment and storage medium |
CN111737297A (en) * | 2020-06-15 | 2020-10-02 | 中国工商银行股份有限公司 | Method and device for processing link aggregation call information |
CN112367321A (en) * | 2020-11-10 | 2021-02-12 | 苏州万店掌网络科技有限公司 | Method for quickly constructing service call and middle station API gateway |
- 2022
- 2022-06-30: CN202210768112.6A filed in China; published as CN115174502A, status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5631373B2 (en) | Probabilistic techniques for matching cache entries | |
CN108694075B (en) | Method and device for processing report data, electronic equipment and readable storage medium | |
CN113094392B (en) | Data caching method and device | |
US9563531B2 (en) | Storage of mass data for monitoring | |
US20140025898A1 (en) | Cache replacement for shared memory caches | |
US20240345776A1 (en) | Data processing method and system based on multi-level cache | |
CN108471385B (en) | Flow control method and device for distributed system | |
US20230224209A1 (en) | Adaptive time window-based log message deduplication | |
CN111124270A (en) | Method, apparatus and computer program product for cache management | |
CN115174502A (en) | Flow control method, device, equipment and medium of API gateway | |
US6973536B1 (en) | Self-adaptive hybrid cache | |
CN108519987A (en) | A kind of data persistence method and apparatus | |
CN113360577A (en) | MPP database data processing method, device, equipment and storage medium | |
CN114528068B (en) | Method for eliminating cold start of server-free computing container | |
CN110413689B (en) | Multi-node data synchronization method and device for memory database | |
CN112241418B (en) | Distributed database preprocessing method, agent layer, system and storage medium | |
CN113742131A (en) | Method, electronic device and computer program product for storage management | |
CN111666045A (en) | Data processing method, system, equipment and storage medium based on Git system | |
CN117056246A (en) | Data caching method and system | |
CN114817090B (en) | MCU communication management method and system with low RAM consumption | |
CN116226151A (en) | Method and device for storing, reading and deleting data | |
CN107577618B (en) | Three-path balanced cache elimination method and device | |
CN112600941B (en) | Method, device and storage medium for automatically updating transmission data size optimization | |
CN113253922B (en) | Cache management method, device, electronic equipment and computer readable storage medium | |
Zagarese et al. | Enabling advanced loading strategies for data intensive web services |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||