CN114528049A - Method and system for realizing API call information statistics based on InfluxDB - Google Patents
- Publication number
- CN114528049A (application number CN202210152357.6A)
- Authority
- CN
- China
- Prior art keywords
- data
- influxdb
- module
- api
- cache database
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44521—Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
- G06F9/44526—Plug-ins; Add-ons
Abstract
The invention discloses a method and a system for realizing API call information statistics based on InfluxDB, belonging to the field of API calling. The method comprises the following specific steps: S1, the published API call information is stored in a cache database by means of the gateway Kong; S2, the call information in the cache database is regularly stored into InfluxDB; S3, the call condition of the API is queried using the interface of InfluxDB; S4, data already processed into the standard format inside the plug-in is saved directly into InfluxDB. An API management console creates an API; when the API is called, the call information is stored into the cache database, and a timing task then stores the call information from the cache database into InfluxDB. Storing API call information in InfluxDB is simpler and higher-performance than storing it in MySQL: InfluxDB supports SQL-like syntax, indexing, and serialization, can aggregate data automatically, and makes subsequent queries quicker and more efficient.
Description
Technical Field
The invention discloses a method and a system for realizing API calling information statistics based on InfluxDB, and relates to the technical field of API calling.
Background
Kong is a highly available, easily extensible API gateway project open-sourced by Mashape and written on top of the Nginx/Lua module. Because Kong is based on Nginx, multiple Kong servers can be scaled out horizontally to handle large volumes of network requests, with a front load balancer distributing the requests evenly across the individual servers.
Kong uses a plug-in mechanism for functional customization: a set of plug-ins (possibly zero, possibly many) is executed during the life cycle of each API request/response. Plug-ins are written in Lua, and several basic functions are currently available: HTTP basic authentication, key authentication, CORS (Cross-Origin Resource Sharing), TCP, UDP, file logging, API request throttling, request forwarding, and Nginx monitoring.
The cache database (Redis in this embodiment) is one of the most popular NoSQL databases at present: an open-source key-value store written in ANSI C that provides multiple data structures, supports networking, is memory-based with optional persistence, complies with the BSD license, and provides APIs for many languages.
Once the Lua-based plug-in is installed in Kong, each client request to an API first reaches Kong, which then proxies it to the target API. Between the request and the response, Kong executes whatever plug-ins are installed, extending the API's function set; Kong effectively becomes the entry point for every API.
InfluxDB is an open-source time-series database developed by InfluxData. A time-series database stores data keyed by time: every record carries a timestamp, which makes it particularly suitable for data that changes over time, and with some tooling the trend of that data can be analyzed. InfluxDB is written in Go and focuses on high-performance querying and storage of time-ordered data. It is widely used in scenarios such as storing system monitoring data and real-time IoT data.
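To make the storage model concrete, the sketch below builds an InfluxDB line-protocol record by hand. The measurement, tag, and field names (`api_calls`, `api_id`, `call_count`) are illustrative assumptions, not names from the patent; the line-protocol shape (`measurement,tags fields timestamp`, with `i` suffixing integer fields) is InfluxDB's documented format.

```java
// Minimal sketch of an InfluxDB line-protocol record for API call statistics.
// Measurement, tag, and field names here are illustrative, not from the patent.
public class LineProtocolSketch {
    static String point(String measurement, String apiId, long count, long tsNanos) {
        // line protocol: <measurement>,<tag_key>=<tag_value> <field_key>=<field_value>i <timestamp>
        return measurement + ",api_id=" + apiId + " call_count=" + count + "i " + tsNanos;
    }

    public static void main(String[] args) {
        // prints: api_calls,api_id=user-service call_count=42i 1645152000000000000
        System.out.println(point("api_calls", "user-service", 42, 1645152000000000000L));
    }
}
```

Each such record carries its own nanosecond timestamp, which is what makes InfluxDB suitable for the per-call statistics described here.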
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method and a system for realizing API call information statistics based on InfluxDB. The adopted technical scheme is as follows: a method for realizing API call information statistics based on InfluxDB comprises the following steps:
S1, the published API call information is stored in a cache database by means of the gateway Kong;
S2, the call information in the cache database is regularly stored into InfluxDB;
S3, the call condition of the API is queried using the interface of InfluxDB;
S4, data already processed into the standard format inside the plug-in is saved directly into InfluxDB.
The specific steps of S2, regularly saving the call information in the cache database into InfluxDB, are as follows:
S201, the timing task regularly acquires the data stored in the cache database;
S202, the timing task stores the acquired data into InfluxDB;
S203, the timing task judges whether the storage succeeded and, on success, deletes the stored data from the cache.
The specific steps of S3, querying the call condition of the API using the interface of InfluxDB, are as follows:
S301, using a timing task, the call information of the API is acquired in batches from the cache database;
S302, the data is stored into InfluxDB using the batch interface of the InfluxDB SDK, and after successful storage the call information is deleted from the cache database.
The specific steps of S301, acquiring the call information of the API in batches from the cache database using the timing task, are as follows:
S3011, the API call information is saved by the plug-in under keys beginning with APIg:monitor;
S3012, using the scan command of the cache database, a batch of keys is fetched each time together with the cursor for scanning the database;
S3013, the above operations are repeated until the cursor value is zero, and the current task ends.
In S302, the data is stored into InfluxDB using the batch interface of the InfluxDB SDK; the specific steps of deleting the call information from the cache database after successful storage are as follows:
S3021, judging from the value in the cache database whether another process is operating;
S3022, writing identification data into the cache database;
S3023, using a timestamp as the value of the identification data;
S3024, deleting the identification data from the cache database after the current thread completes the data synchronization.
A system for realizing API call information statistics based on InfluxDB specifically comprises a data caching module, an information storage module, a timing task module and a data processing module:
a data caching module: storing the published API call information into a cache database by means of the gateway Kong;
an information storage module: regularly storing the call information in the cache database into InfluxDB;
a timing task module: querying the call condition of the API using the interface of InfluxDB;
a data processing module: directly storing the data already processed into the standard format inside the plug-in into InfluxDB.
The information storage module specifically comprises a timing acquisition module, a timing saving module and a timing processing module:
a timing acquisition module: the timing task regularly acquires the data stored in the cache database;
a timing saving module: the timing task stores the acquired data into InfluxDB;
a timing processing module: the timing task judges whether the storage succeeded and, on success, deletes the stored data from the cache.
The timing task module specifically comprises an information acquisition module and an information processing module:
an information acquisition module: using a timing task, acquiring the call information of the API in batches from the cache database;
an information processing module: storing the data into InfluxDB using the batch interface of the InfluxDB SDK, and deleting the call information from the cache database after successful storage.
The information acquisition module specifically comprises a plug-in configuration module, a cursor processing module and a task processing module:
a plug-in configuration module: the API call information is saved by the plug-in under keys beginning with APIg:monitor;
a cursor processing module: using the scan command of the cache database, fetching a batch of keys each time together with the cursor for scanning the database;
a task processing module: repeating the above operations until the cursor value is zero, and ending the current task.
The information processing module specifically comprises a process judgment module, an identification writing module, an identification processing module and an identification deleting module:
a process judgment module: judging whether other operation processes exist according to the value in the cache database;
an identification writing module: writing identification data in a cache database;
an identification processing module: using a timestamp as the value of the identification data;
an identification deletion module: and deleting the identification data in the cache database after the current thread completes the data synchronization.
The invention has the beneficial effects that: the invention provides a method for realizing API call information statistics based on InfluxDB, in which an API management console creates an API, the call information is stored into a cache database when the API is called, and a timing task then stores the call information from the cache database into InfluxDB. Storing API call information in InfluxDB is simpler and higher-performance than storing it in MySQL: InfluxDB supports SQL-like syntax, indexing, and serialization, can aggregate data automatically, and makes subsequent queries quicker and more efficient.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of a method embodiment of the present invention; FIG. 2 is a flow chart of the timed task execution of an embodiment of the present invention.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
The first embodiment is as follows:
a method for realizing API call information statistics based on InfluxDB comprises the following specific steps:
S1, the published API call information is stored in a cache database by means of the gateway Kong;
S2, the call information in the cache database is regularly stored into InfluxDB;
S3, the call condition of the API is queried using the interface of InfluxDB;
S4, data already processed into the standard format inside the plug-in is saved directly into InfluxDB;
the saving of the Kong API call condition into the cache is implemented through Lua code, with Redis as the cache database and InfluxDB as the database that stores the API call condition; when the user's API is called, the call condition is saved into the cache, and a timing task then saves the data in the cache into InfluxDB;
when a user creates an API on the console page, no plug-in needs to be bound manually; Kong executes the plug-in that saves the call condition whenever the API is called, storing the data into the cache database;
Cache format: application and API call information is stored using the Redis hash format.
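The hash layout described above can be sketched as follows, with an in-memory map standing in for Redis. The field name `count` and the per-API key suffix are illustrative assumptions; only the `APIg:monitor` key prefix comes from the text.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the cache layout: one Redis hash per API, keyed by the
// plug-in's APIg:monitor prefix (field names below are assumptions).
public class CallInfoCache {
    // key -> (field -> value), mirroring Redis HSET semantics
    static final Map<String, Map<String, String>> store = new HashMap<>();

    static void recordCall(String apiId) {
        String key = "APIg:monitor:" + apiId;           // key prefix from the plug-in
        Map<String, String> hash = store.computeIfAbsent(key, k -> new HashMap<>());
        long count = Long.parseLong(hash.getOrDefault("count", "0"));
        hash.put("count", String.valueOf(count + 1));   // HINCRBY in real Redis
    }

    static String count(String apiId) {
        Map<String, String> hash = store.get("APIg:monitor:" + apiId);
        return hash == null ? "0" : hash.getOrDefault("count", "0");
    }
}
```

In the real system the Lua plug-in performs the equivalent HSET/HINCRBY against Redis on every proxied request.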
Further, the specific steps of S2, regularly saving the call information in the cache database into InfluxDB, are as follows:
S201, the timing task regularly acquires the data stored in the cache database;
S202, the timing task stores the acquired data into InfluxDB;
S203, the timing task judges whether the storage succeeded and, on success, deletes the stored data from the cache;
the call information of the API is acquired in batches from the Redis cache using a timing task, the data is stored into InfluxDB using the batch interface of the InfluxDB SDK, and after successful storage the call information is deleted from the Redis cache; the data has already been processed inside the plug-in into the standard form stored by InfluxDB, so it is saved directly and the timing task does not need to process it separately;
further, the specific steps of S3, querying the call condition of the API using the interface of InfluxDB, are as follows:
S301, using a timing task, the call information of the API is acquired in batches from the cache database;
S302, the data is stored into InfluxDB using the batch interface of the InfluxDB SDK, and after successful storage the call information is deleted from the cache database;
creating a timed task by using the note @ Scheduled in the SpringBoot, wherein the execution frequency of the timed task is 1 minute
@Scheduled(cron = "0 0/1 * * * ?")
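Outside of Spring, the same once-per-interval trigger can be sketched with a plain `ScheduledExecutorService`; the one-minute period is shortened to milliseconds here purely for demonstration, and the task body is a stand-in for the synchronization logic.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Plain-Java stand-in for the @Scheduled(cron = "0 0/1 * * * ?") timed task.
public class SyncScheduler {
    /** Runs a dummy sync task every periodMillis, returns after `runs` executions. */
    static int runSyncTask(long periodMillis, int runs) {
        CountDownLatch latch = new CountDownLatch(runs);
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        // latch::countDown stands in for the method body annotated with @Scheduled
        ses.scheduleAtFixedRate(latch::countDown, 0, periodMillis, TimeUnit.MILLISECONDS);
        try {
            latch.await();                        // wait until the task has run `runs` times
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        ses.shutdownNow();
        return (int) (runs - latch.getCount());   // number of completed executions
    }
}
```

Spring's scheduler does the equivalent wiring automatically from the cron expression.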
Further, the step S301 of obtaining the calling information of the API in batch from the cache of the cache database by using the timing task includes:
s3011, calling and saving information by the API, wherein APIg: monitor is used as the beginning of the key in the plug-in;
s3012, acquiring a certain number of key sets to obtain cursors of the scanning data of the cache database by means of scan commands of the cache database;
s3013, repeating the above operations until the cursor value is zero, and ending the current task;
API calls and stores information, uses APIg (android package) monitor as the beginning of a key in the plug-in, uses the scan command of redis to fetch a certain number of key sets each time, and simultaneously obtains the cursor of redis scanning data, wherein the cursor is the key value of the scanning data, thereby ensuring that repeated data cannot be obtained
By using the pipeline characteristic of the redis, the API call storage information can be obtained in batch according to the key set obtained from the redis only by one request;
the above operations are repeated continuously until the value of the cursor is 0, which indicates that there is no unsynchronized API call information data in redis, and the current task can be ended;
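The SCAN-style cursor loop above can be sketched as follows, with an in-memory key list standing in for Redis. The batch size is illustrative; as with Redis SCAN, a returned cursor of 0 means the scan is complete.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the cursor loop: fetch a batch of keys per "scan", keep the
// keys matching the plug-in's prefix, stop when the cursor returns to 0.
public class ScanLoop {
    static List<String> scanAll(List<String> keys, int batchSize) {
        List<String> matched = new ArrayList<>();
        int cursor = 0;
        do {
            int end = Math.min(cursor + batchSize, keys.size());
            for (String k : keys.subList(cursor, end)) {
                if (k.startsWith("APIg:monitor")) {     // key prefix from the plug-in
                    matched.add(k);
                }
            }
            cursor = (end == keys.size()) ? 0 : end;    // 0 signals "scan finished"
        } while (cursor != 0);
        return matched;
    }
}
```

In the real task, each batch of matched keys would then be read through a single pipelined request before the next scan iteration.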
still further, in S302 the data is stored into InfluxDB using the batch interface of the InfluxDB SDK; the specific steps of deleting the call information from the cache database after successful storage are as follows:
S3021, judging from the value in the cache database whether another process is operating;
S3022, writing identification data into the cache database;
S3023, using a timestamp as the value of the identification data;
S3024, deleting the identification data from the cache database after the current thread completes the data synchronization;
when the task starts, it judges from the value in Redis whether another process is operating; if another process is still synchronizing data, the current task ends immediately and waits for the next task period; if no other process is synchronizing data, the following flow is performed;
when the task starts, a piece of identification data is written into Redis indicating that a thread is currently synchronizing data and that other threads need not operate;
the identification data is as follows:
Key: APIg:kong:monitor
Value: timestamp
a timestamp is used as the value of the identification data to prevent an exception in the execution of the current thread from leaving the data out of sync for a long time;
the current thread deletes the identification data from Redis after the data synchronization is completed;
this process lock ensures that only one process operates on the data at a time, that the data processing is not disordered, and that the data remains valid and safe;
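The timestamp-based process lock can be sketched as follows; a `ConcurrentHashMap` stands in for Redis, and the staleness window (after which a crashed holder's lock may be taken over) is an illustrative assumption.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the process lock: the lock value is a timestamp, so a lock left
// behind by a crashed thread expires instead of blocking sync forever.
public class SyncLock {
    static final ConcurrentHashMap<String, Long> redis = new ConcurrentHashMap<>();
    static final String LOCK_KEY = "APIg:kong:monitor";  // identification key from the text

    /** Returns true if this process acquired the lock. */
    static boolean tryAcquire(long nowMillis, long staleAfterMillis) {
        Long held = redis.get(LOCK_KEY);
        if (held != null && nowMillis - held < staleAfterMillis) {
            return false;                    // another process is still synchronizing
        }
        redis.put(LOCK_KEY, nowMillis);      // absent or stale: take over (SET key timestamp)
        return true;
    }

    static void release() {
        redis.remove(LOCK_KEY);              // DEL after synchronization completes
    }
}
```

Against a real Redis, the get/put pair would need to be an atomic SET NX-style operation; the map here only illustrates the timestamp-expiry idea.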
each piece of API call information obtained from Redis has already been processed inside the plug-in into a standard InfluxDB insert statement and does not need to be processed again;
a connection to InfluxDB is established, specifying the database name and the retention policy (the retention policy is created together with the database and must be specified when the connection is established), and the API call information obtained from Redis is saved using the batch-write method of the SDK;
after the save succeeds, the InfluxDB connection just created is closed, avoiding waste of InfluxDB resources and any impact on InfluxDB queries;
the API call information obtained from Redis must be deleted from Redis after it has been synchronized into InfluxDB, ensuring that the data is synchronized only once;
the keys are deleted in one operation using Redis transactional calls; if the deletion fails, the transaction is rolled back to guarantee the integrity of the data.
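The whole synchronization step, drain the cached records, batch-write them, and delete the cache entries only on success, can be sketched end to end. The `BatchWriter` interface is a stand-in for the InfluxDB SDK's batch-write call, and the in-memory map stands in for Redis with its transactional delete.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of one sync cycle: write all cached points in one batch, and only
// delete them from the cache if the write succeeded (so a failed write is
// simply retried on the next task period).
public class BatchSync {
    interface BatchWriter { boolean write(List<String> points); }   // stand-in for the SDK

    static int sync(Map<String, String> cache, BatchWriter writer) {
        List<String> keys = new ArrayList<>(cache.keySet());
        List<String> points = new ArrayList<>();
        for (String k : keys) points.add(cache.get(k));
        if (!writer.write(points)) {
            return 0;                                // write failed: keep cache, retry later
        }
        for (String k : keys) cache.remove(k);       // transactional MULTI/DEL in real Redis
        return keys.size();
    }
}
```

Deleting only after a confirmed write is what gives the "synchronized exactly once" property the text describes; against real Redis the deletions would go through MULTI/EXEC so a failure rolls back.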
Example two:
a system for realizing API call information statistics based on InfluxDB specifically comprises a data caching module, an information storage module, a timing task module and a data processing module:
a data caching module: storing the published API call information into a cache database by means of the gateway Kong;
an information storage module: regularly storing the call information in the cache database into InfluxDB;
a timing task module: querying the call condition of the API using the interface of InfluxDB;
a data processing module: directly storing the data already processed into the standard format inside the plug-in into InfluxDB;
further, the information storage module specifically includes a timing acquisition module, a timing saving module, and a timing processing module:
a timing acquisition module: the timing task regularly acquires the data stored in the cache database;
a timing saving module: the timing task stores the acquired data into InfluxDB;
a timing processing module: the timing task judges whether the storage succeeded and, on success, deletes the stored data from the cache;
further, the timed task module specifically includes an information acquisition module and an information processing module:
an information acquisition module: using a timing task, acquiring the call information of the API in batches from the cache database;
an information processing module: storing the data into InfluxDB using the batch interface of the InfluxDB SDK, and deleting the call information from the cache database after successful storage;
further, the information acquisition module specifically includes a plug-in configuration module, a cursor processing module and a task processing module:
a plug-in configuration module: the API call information is saved by the plug-in under keys beginning with APIg:monitor;
a cursor processing module: using the scan command of the cache database, fetching a batch of keys each time together with the cursor for scanning the database;
a task processing module: repeating the above operations until the cursor value is zero, and ending the current task;
still further, the information processing module specifically includes a process judgment module, an identifier writing module, an identifier processing module, and an identifier deleting module:
a process judgment module: judging whether other operation processes exist according to the value in the cache database;
an identification writing module: writing identification data in a cache database;
an identification processing module: using a timestamp as the value of the identification data;
an identification deletion module: and deleting the identification data in the cache database after the current thread completes the data synchronization.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A method for realizing API call information statistics based on InfluxDB is characterized by comprising the following steps:
S1, the published API call information is stored in a cache database by means of the gateway Kong;
S2, the call information in the cache database is regularly stored into InfluxDB;
S3, the call condition of the API is queried using the interface of InfluxDB;
S4, data already processed into the standard format inside the plug-in is saved directly into InfluxDB.
2. The method as claimed in claim 1, wherein the specific steps of S2, regularly storing the call information in the cache database into InfluxDB, are as follows:
S201, the timing task regularly acquires the data stored in the cache database;
S202, the timing task stores the acquired data into InfluxDB;
S203, the timing task judges whether the storage succeeded and, on success, deletes the stored data from the cache.
3. The method as claimed in claim 2, wherein the specific steps of S3, querying the call condition of the API using the interface of InfluxDB, are as follows:
S301, using a timing task, the call information of the API is acquired in batches from the cache database;
S302, the data is stored into InfluxDB using the batch interface of the InfluxDB SDK, and after successful storage the call information is deleted from the cache database.
4. The method as claimed in claim 3, wherein the specific steps of S301, acquiring the call information of the API in batches from the cache database using the timing task, are as follows:
S3011, the API call information is saved by the plug-in under keys beginning with APIg:monitor;
S3012, using the scan command of the cache database, a batch of keys is fetched each time together with the cursor for scanning the database;
S3013, the above operations are repeated until the cursor value is zero, and the current task ends.
5. The method as claimed in claim 4, wherein in S302 the data is stored into InfluxDB using the batch interface of the InfluxDB SDK, and the specific steps of deleting the call information from the cache database after successful storage are as follows:
S3021, judging from the value in the cache database whether another process is operating;
S3022, writing identification data into the cache database;
S3023, using a timestamp as the value of the identification data;
S3024, deleting the identification data from the cache database after the current thread completes the data synchronization.
6. A system for realizing API call information statistics based on InfluxDB, characterized by specifically comprising a data caching module, an information storage module, a timing task module and a data processing module:
a data caching module: storing the published API call information into a cache database by means of the gateway Kong;
an information storage module: regularly storing the call information in the cache database into InfluxDB;
a timing task module: querying the call condition of the API using the interface of InfluxDB;
a data processing module: directly storing the data already processed into the standard format inside the plug-in into InfluxDB.
7. The system according to claim 6, wherein the information storage module specifically comprises a timing acquisition module, a timing saving module and a timing processing module:
a timing acquisition module: the timing task regularly acquires the data stored in the cache database;
a timing saving module: the timing task stores the acquired data into InfluxDB;
a timing processing module: the timing task judges whether the storage succeeded and, on success, deletes the stored data from the cache.
8. The system as claimed in claim 7, wherein the timed task module specifically comprises an information acquisition module and an information processing module:
an information acquisition module: using a timing task, acquiring the call information of the API in batches from the cache database;
an information processing module: storing the data into InfluxDB using the batch interface of the InfluxDB SDK, and deleting the call information from the cache database after successful storage.
9. The system according to claim 8, wherein the information acquisition module comprises a plug-in configuration module, a cursor processing module and a task processing module:
a plug-in configuration module: the API call information is saved by the plug-in under keys beginning with APIg:monitor;
a cursor processing module: using the scan command of the cache database, fetching a batch of keys each time together with the cursor for scanning the database;
a task processing module: repeating the above operations until the cursor value is zero, and ending the current task.
10. The system as claimed in claim 9, wherein the information processing module specifically includes a process determining module, an identifier writing module, an identifier processing module, and an identifier deleting module:
a process judgment module: judging whether other operation processes exist according to the value in the cache database;
an identification writing module: writing identification data in a cache database;
an identification processing module: using a timestamp as the value of the identification data;
an identification deletion module: and deleting the identification data in the cache database after the current thread completes the data synchronization.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210152357.6A CN114528049A (en) | 2022-02-18 | 2022-02-18 | Method and system for realizing API call information statistics based on InfluxDB |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210152357.6A CN114528049A (en) | 2022-02-18 | 2022-02-18 | Method and system for realizing API call information statistics based on InfluxDB |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114528049A true CN114528049A (en) | 2022-05-24 |
Family
ID=81623080
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210152357.6A Pending CN114528049A (en) | 2022-02-18 | 2022-02-18 | Method and system for realizing API call information statistics based on InfluxDB |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114528049A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115048390A (en) * | 2022-08-16 | 2022-09-13 | 国能日新科技股份有限公司 | Data storage method and device based on influxdb |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105653356A (en) * | 2016-01-05 | 2016-06-08 | 世纪禾光科技发展(北京)有限公司 | Method and device for processing multi-server concurrent operation |
CN105930491A (en) * | 2016-04-28 | 2016-09-07 | 安徽四创电子股份有限公司 | Monitoring data storage method based on time sequence database InfluxDB |
CN109800129A (en) * | 2019-01-17 | 2019-05-24 | 青岛特锐德电气股份有限公司 | A kind of real-time stream calculation monitoring system and method for processing monitoring big data |
KR102027823B1 (en) * | 2019-04-24 | 2019-10-02 | 주식회사 리앙커뮤니케이션즈 | Intelligent caching system with improved system response performance based on plug in method |
CN111556023A (en) * | 2020-03-31 | 2020-08-18 | 紫光云技术有限公司 | Authority-based content configurable method |
CN112818325A (en) * | 2021-01-30 | 2021-05-18 | 浪潮云信息技术股份公司 | Method for realizing API gateway independent authentication based on application |
- 2022-02-18: application CN202210152357.6A filed in China; patent CN114528049A, status Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105653356A (en) * | 2016-01-05 | 2016-06-08 | 世纪禾光科技发展(北京)有限公司 | Method and device for processing multi-server concurrent operation |
CN105930491A (en) * | 2016-04-28 | 2016-09-07 | 安徽四创电子股份有限公司 | Monitoring data storage method based on time sequence database InfluxDB |
CN109800129A (en) * | 2019-01-17 | 2019-05-24 | 青岛特锐德电气股份有限公司 | A kind of real-time stream calculation monitoring system and method for processing monitoring big data |
KR102027823B1 (en) * | 2019-04-24 | 2019-10-02 | 주식회사 리앙커뮤니케이션즈 | Intelligent caching system with improved system response performance based on plug in method |
CN111556023A (en) * | 2020-03-31 | 2020-08-18 | 紫光云技术有限公司 | Authority-based content configurable method |
CN112818325A (en) * | 2021-01-30 | 2021-05-18 | 浪潮云信息技术股份公司 | Method for realizing API gateway independent authentication based on application |
Non-Patent Citations (2)
Title |
---|
CALHAN, ALI: "EHealth monitoring testbed with fuzzy based early warning score system", Computer Methods and Programs in Biomedicine, vol. 202, 25 February 2021 (2021-02-25) *
MENG Yu; CHEN Feng; HAO Xiaodong: "Application of C# combined with InfluxDB in industry" (C#结合InfluxDB在工业中的应用), Metallurgical Automation (冶金自动化), no. 1, 15 August 2020 (2020-08-15) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115048390A (en) * | 2022-08-16 | 2022-09-13 | 国能日新科技股份有限公司 | Data storage method and device based on influxdb |
CN115048390B (en) * | 2022-08-16 | 2022-11-01 | 国能日新科技股份有限公司 | Data storage method and device based on influxdb |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR100625595B1 (en) | Parallel Logging Method of Transaction Processing System | |
US8560502B2 (en) | Method and a system for replaying database workload with transactional consistency | |
CN109829011B (en) | Data synchronization method and device for distributed heterogeneous database | |
US20090125563A1 (en) | Replicating and sharing data between heterogeneous data systems | |
EP2947582A1 (en) | Computing device and method for executing database operation command | |
CN111177161B (en) | Data processing method, device, computing equipment and storage medium | |
CN111367925A (en) | Data dynamic real-time updating method, device and storage medium | |
CN111447102B (en) | SDN network device access method and device, computer device and storage medium | |
WO2018035799A1 (en) | Data query method, application and database servers, middleware, and system | |
CN109086382B (en) | Data synchronization method, device, equipment and storage medium | |
US20130117414A1 (en) | Dynamic Interface to Read Database Through Remote Procedure Call | |
CN112286941A (en) | Big data synchronization method and device based on Binlog + HBase + Hive | |
CN112307119A (en) | Data synchronization method, device, equipment and storage medium | |
CN109842621A (en) | A kind of method and terminal reducing token storage quantity | |
CN111107022B (en) | Data transmission optimization method, device and readable storage medium | |
CN113076304A (en) | Distributed version management method, device and system | |
US8600990B2 (en) | Interacting methods of data extraction | |
CN113032421A (en) | MongoDB-based distributed transaction processing system and method | |
US7958083B2 (en) | Interacting methods of data summarization | |
CN113886485A (en) | Data processing method, device, electronic equipment, system and storage medium | |
CN114528049A (en) | Method and system for realizing API call information statistics based on InfluxDB | |
CN113438275B (en) | Data migration method and device, storage medium and data migration equipment | |
US11921708B1 (en) | Distributed execution of transactional queries | |
US20120005175A1 (en) | Method and a system for real time replaying of database workload at a fixed initial time delay | |
US20060059176A1 (en) | Suspending a result set and continuing from a suspended result set |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||