CN110889053A - Interface data caching method and device and computing equipment

Info

Publication number: CN110889053A (granted as CN110889053B)
Application number: CN201911097008.3A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 石翠宁, 尚国军
Assignee (original and current): Beijing Cheerbright Technologies Co Ltd
Legal status: Granted; Active
Prior art keywords: interface, memory, interface address, data, address

Classifications

    • G06F16/9566 (G06F16/955, retrieval from the web using information identifiers such as URLs): URL specific, e.g. using aliases, detecting broken or misspelled links
    • G06F16/9574 (G06F16/957, browsing optimisation): caching of access to content

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method, an apparatus, and a computing device for caching interface data. The computing device is connected to a cache server that comprises a first memory, a second memory, and an interface address queue; the first memory is adapted to store associations between interface addresses and interface data, and the second memory is adapted to store associations between interface addresses and next update times. The method comprises: at a predetermined time interval, obtaining from the second memory the interface addresses whose next update time is earlier than the current time, inserting the obtained addresses into the interface address queue, and deleting the corresponding data entries from the second memory; taking an interface address out of the interface address queue and accessing it to obtain the corresponding interface data; and updating the first memory with the interface data so obtained, while storing the interface address and a newly set next update time into the second memory in association with each other.

Description

Interface data caching method and device and computing equipment
Technical Field
The present invention relates to the field of internet, and in particular, to a method and an apparatus for caching interface data, and a computing device.
Background
In internet applications it is often the case that system performance is limited by the access speed of certain interfaces. Because reading a cache is far faster than reading an interface, for interfaces whose data tolerates short-term caching, the interface data can be placed in a cache server (such as a Redis cluster); this reduces the number of interface accesses and thereby improves performance.
In the existing interface data caching scheme, the interface is typically accessed synchronously, the obtained interface data is returned to the client, and the data is cached in Redis; when the expiration time is reached, the cached interface data is deleted. If the expiration time is set too long, the interface data is cached for too long and the real-time accuracy of the data may suffer (the source interface data may have been updated while the cached copy has not); if it is set too short, the data is cached too briefly, the probability of a cache hit drops, requests penetrate the cache and access the interface directly, and the response speed falls, which in turn hurts the user experience.
Therefore, a solution is needed that guarantees data freshness while improving the access speed of the interface.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, and a computing device for caching interface data that overcome, or at least partially solve, the above problems.
According to an aspect of the present invention, there is provided a method for caching interface data, the method being executed in a computing device, the computing device being connected to a cache server, the cache server including a first memory, a second memory and an interface address queue, the first memory being adapted to store an association relationship between an interface address and interface data, the second memory being adapted to store an association relationship between an interface address and a next update time, the method including:
according to a preset time interval, acquiring an interface address of which the next updating time is less than the current time from the second memory, inserting the acquired interface address into the interface address queue, and deleting a data entry corresponding to the acquired interface address from the second memory;
taking out an interface address from the interface address queue and accessing the interface address to obtain corresponding interface data;
and updating the interface data acquired by accessing the interface address into the first memory, setting the next update time of the interface address, and storing the interface address and the next update time into the second memory in a correlated manner.
Optionally, the interface data caching method according to the present invention further includes: receiving an interface request sent by a client, and judging whether interface data corresponding to the requested interface address is stored in a first memory; when the interface data corresponding to the requested interface address is not stored in the first memory, accessing the interface address to acquire the corresponding interface data; and returning the interface data acquired by accessing the interface address to the client, updating the interface data into the first memory, setting the next updating time of the interface address, and storing the interface address and the next updating time into the second memory in a correlated manner.
Optionally, the interface data caching method according to the present invention further includes: and when the first memory stores the interface data corresponding to the requested interface address, returning the interface data to the client.
Optionally, in the interface data caching method according to the present invention, the cache server employs Redis, and/or the second memory employs an ordered set Sortset.
According to another aspect of the present invention, there is also provided an apparatus for caching interface data, where the apparatus resides in a computing device, the computing device is connected to a cache server, the cache server includes a first memory, a second memory, and an interface address queue, the first memory is adapted to store an association relationship between an interface address and interface data, the second memory is adapted to store an association relationship between an interface address and a next update time, the apparatus includes:
the data synchronization unit is suitable for acquiring an interface address with the next updating time smaller than the current time from the second memory according to a preset time interval, inserting the acquired interface address into the interface address queue, and deleting a data entry corresponding to the acquired interface address from the second memory;
the queue consumption unit is suitable for taking out the interface address from the interface address queue, accessing the interface address to obtain corresponding interface data, updating the interface data obtained by accessing the interface address into the first memory, setting the next updating time of the interface address, and storing the interface address and the next updating time into the second memory in a correlation manner.
According to yet another aspect of the present invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the above-described method.
According to yet another aspect of the present invention, there is provided a readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the above-described method.
With the interface data caching scheme of the present invention, the caching time of interface data can be shortened while the cache-hit probability is greatly increased; data freshness is thus guaranteed while the response speed (interface access speed) is improved, which in turn improves the user experience.
The foregoing description is only an overview of the technical solution of the present invention. To make the technical means of the present invention more clearly understood, and to make the above and other objects, features, and advantages of the present invention more readily apparent, embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic diagram of a cache system 100 for interface data according to an embodiment of the invention;
FIG. 2 shows a block diagram of a computing device 200, according to one embodiment of the invention;
FIG. 3 illustrates a flow diagram of a method 300 for caching interface data according to one embodiment of the invention; and
fig. 4 shows a block diagram of a buffer apparatus 400 for interface data according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a schematic diagram of a cache system 100 for interface data according to an embodiment of the present invention. As shown in FIG. 1, the system 100 includes one or more clients 110 (3 shown) and a server 120; the clients 110 communicate with the server 120 via the Internet. A client 110 may be a browser or a web application (webapp) residing on a computing device; it can send an interface request (e.g., an HTTP request) containing the requested interface address to the server 120, and receive the interface data corresponding to that address returned by the server 120.
The server 120 is connected to one or more third-party servers 140 on the one hand and to the cache server 130 on the other hand. The third-party server 140 refers to a server corresponding to the interface address requested by the client 110, and the server 120 can access the third-party server 140 corresponding to the interface address, obtain corresponding interface data from the third-party server 140, and return the obtained interface data to the client 110 that initiated the request. In addition, the server 120 can further store the acquired interface data into the cache server 130, where the cache server 130 is, for example, a Redis server or a Redis cluster, and of course, the cache server 130 may also be another type of cache server.
The cache server 130 includes a first memory 132, a second memory 134, and an interface address queue 136. The first memory 132 is adapted to store associations between interface addresses and interface data, the second memory 134 is adapted to store associations between interface addresses and next update times, and the interface address queue 136 is a queue of interface addresses. After the server 120 obtains interface data from a third-party server 140, it stores the association between the interface address and the interface data as a data entry in the first memory 132, typically in key-value form with the interface address as key and the interface data as value. The second memory 134 may be implemented as an ordered set (Sortset) containing a number of data entries, each an association between an interface address and a next update time, with entries sorted by next update time. The interface address queue 136 may be a first-in first-out queue.
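As an illustration of the three structures just described, the following sketch uses plain in-memory Python stand-ins; in a real deployment these would be Redis structures (the first memory as ordinary key-value pairs, the second memory as a sorted set whose scores are the next update timestamps, and the queue as a Redis list). All names and the sample payload below are illustrative, not part of the patent:

```python
from collections import deque

# First memory: interface address -> interface data
# (in Redis: a key-value pair, address as key, data as value).
first_memory = {}

# Second memory: interface address -> next update time as a Unix timestamp
# (in Redis: a sorted set, with the timestamp as the entry's score).
second_memory = {}

# Interface address queue: FIFO of addresses waiting to be refreshed
# (in Redis: a list, pushed on one end and popped from the other).
address_queue = deque()

# One cached entry and its scheduled back-to-source time.
first_memory["http://xxx.com/api/xxx1"] = '{"items": []}'
second_memory["http://xxx.com/api/xxx1"] = 1572861660.0
```

The rest of the scheme is then three operations over these structures: a timed scan of `second_memory`, consumption of `address_queue`, and writes into `first_memory`.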
In order to improve the access speed of the client while ensuring the real-time performance of the data, the embodiment of the present invention further provides a method 300 for caching interface data, where the method 300 is executed by the server 120 provided in the embodiment of the present invention, and the server 120 may be implemented as the computing device 200 described below.
FIG. 2 shows a block diagram of a computing device 200, according to one embodiment of the invention. As shown in FIG. 2, in a basic configuration 202, a computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processor 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a Digital Signal Processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a level-one cache 210 and a level-two cache 212, a processor core 214, and registers 216. An example processor core 214 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, system memory 206 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 206 may include an operating system 220, one or more applications 222, and program data 224. The application 222 is actually a plurality of program instructions that direct the processor 204 to perform corresponding operations. In some embodiments, application 222 may be arranged to cause processor 204 to operate with program data 224 on an operating system.
Computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., output devices 242, peripheral interfaces 244, and communication devices 246) to the basic configuration 202 via the bus/interface controller 230. The example output device 242 includes a graphics processing unit 248 and an audio processing unit 250. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 252. Example peripheral interfaces 244 can include a serial interface controller 254 and a parallel interface controller 256, which can be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 258. An example communication device 246 may include a network controller 260, which may be arranged to facilitate communications with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer-readable instructions, data structures, or program modules in a modulated data signal, and may include any information delivery media, such as carrier waves or other transport mechanisms. A "modulated data signal" is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired or direct-wired network, and various wireless media such as acoustic, Radio Frequency (RF), microwave, Infrared (IR), or other wireless media. The term computer-readable media as used herein may include both storage media and communication media.
Computing device 200 may be implemented as a personal computer including desktop and notebook computer configurations, as well as a server, such as a file server, database server, application server, WEB server, and the like. In an embodiment in accordance with the invention, the computing device 200 is configured to perform a method 300 of caching interface data in accordance with the invention. The application 222 of the computing device 200 includes a plurality of program instructions that implement the method 300 according to the present invention.
Fig. 3 shows a flow diagram of a method 300 for caching interface data according to an embodiment of the invention, the method 300 being suitable for execution in a computing device, e.g. in the server 120 as shown in fig. 1. As mentioned above, the server 120 is connected to one or more third-party servers 140 on the one hand and to the cache server 130 on the other hand. The cache server 130 includes a first memory adapted to store an association relationship between an interface address and interface data, a second memory adapted to store an association relationship between an interface address and a next update time, and an interface address queue.
Referring to fig. 3, the method 300 begins at step S302. In step S302, the interface address whose next update time is less than the current time is obtained from the second memory at a predetermined time interval, the obtained interface address is inserted into the interface address queue, and the data entry corresponding to the obtained interface address is deleted from the second memory.
As described above, the second memory may be implemented as a Sortset that stores a number of data entries, each an association between an interface address and a next update time, sorted in ascending order of next update time. This guarantees the uniqueness and ordering of asynchronous interface accesses, so that the interface most in need of updating is updated first. The data stored in the Sortset is shown in the table below.
Interface address           Next update time (back-to-source time)
http://xxx.com/api/xxx1 2019-11-04 10:01:00
http://xxx.com/api/xxx2 2019-11-04 10:01:01
http://xxx.com/api/xxx3 2019-11-04 10:01:04
A service running on an independent thread can be set up in the computing device to execute this step: at a fixed interval (for example, every 5 seconds), it looks up in the Sortset the interface addresses whose next update time is earlier than the current time, inserts those addresses into the queue of interface addresses to be updated, and deletes the found data entries from the Sortset. Assuming the current time is 2019-11-04 10:01:03, the addresses found are http://xxx.com/api/xxx1 and http://xxx.com/api/xxx2.
The interface address queue is a first-in first-out queue in which the interface addresses to be updated are stored, as shown in the following table.
Interface address (to be updated)
http://xxx.com/api/xxx1
http://xxx.com/api/xxx2
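The timed scan of step S302 can be sketched as follows. This is a minimal in-memory model (dicts and a deque standing in for the Redis Sortset and list; the function name and the small integer timestamps are illustrative); in Redis the same step would read due entries by score and remove them:

```python
import time
from collections import deque

def scan_due_addresses(second_memory, queue, now=None):
    """Step S302: move every interface address whose next update time has
    passed from the second memory into the FIFO update queue, and delete
    the corresponding data entries from the second memory."""
    if now is None:
        now = time.time()
    # Examine entries in ascending order of next update time, mirroring the
    # Sortset's ordering, so the most overdue interface is queued first.
    due = sorted((addr for addr, ts in second_memory.items() if ts < now),
                 key=second_memory.get)
    for addr in due:
        queue.append(addr)        # insert into the interface address queue
        del second_memory[addr]   # delete the entry from the second memory
    return due

# The example from the tables above, with small numbers standing in for
# the timestamps 10:01:00 / 10:01:01 / 10:01:04 and "now" = 10:01:03.
second_memory = {
    "http://xxx.com/api/xxx1": 100.0,
    "http://xxx.com/api/xxx2": 101.0,
    "http://xxx.com/api/xxx3": 104.0,
}
update_queue = deque()
scan_due_addresses(second_memory, update_queue, now=103.0)
```

After the scan, xxx1 and xxx2 sit in the queue in deadline order, while xxx3 (not yet due) stays scheduled in the second memory.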
In step S304, the interface address queue is consumed, that is, the interface address is fetched from the interface address queue in real time, and the interface address is accessed to obtain the corresponding interface data.
Another service on an independent thread may be provided in the computing device to consume the interface address queue: it accesses the third-party server corresponding to each interface address and obtains the corresponding interface data from it. The interface address queue is a first-in first-out queue, i.e., interface addresses are taken out of the queue in first-in first-out order. For example, http://xxx.com/api/xxx1 and http://xxx.com/api/xxx2 are taken out in turn, and the third-party servers corresponding to them are accessed to obtain the corresponding interface data.
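The consumer of step S304 can be sketched as a loop that drains the FIFO queue. Here `fetch` is a stand-in for the HTTP call to the third-party server (the fake below merely records the call order; names are illustrative):

```python
from collections import deque

def consume_queue(queue, fetch):
    """Step S304: drain the interface address queue in first-in first-out
    order, calling `fetch` (a stand-in for accessing the third-party
    server) on each address, and return the fetched data per address."""
    results = {}
    while queue:
        addr = queue.popleft()     # FIFO: oldest queued address first
        results[addr] = fetch(addr)
    return results

order = []
def fake_fetch(addr):
    order.append(addr)
    return "data-for-" + addr

q = deque(["http://xxx.com/api/xxx1", "http://xxx.com/api/xxx2"])
fetched = consume_queue(q, fake_fetch)
```

The FIFO pop preserves the deadline order established by the scan step, so the most overdue interface is refreshed first.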
In step S306, the interface data obtained by accessing the interface address is updated into the first memory; that is, the interface address and the corresponding interface data are stored as a data entry in the first memory, and if a data entry for that interface address already exists in the first memory, the existing entry is deleted first.
Then the next update time of the interface address is set, and the interface address and this next update time are stored in the second memory in association with each other. The next update time may be configured anywhere from several minutes to several hours, depending on the service scenario of the third-party server that provides the interface data. Where the second memory is a Sortset, when data is inserted into it: if the Sortset already contains an entry for the interface address, no insertion is performed; otherwise, the next update time is set and the entry is inserted at the corresponding position in the Sortset.
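Step S306 and the insert-only-if-absent rule can be sketched together (again an in-memory model; the function name and the 300-second default TTL are illustrative, since the patent only says minutes to hours):

```python
import time

def update_cache(addr, data, first_memory, second_memory, ttl_seconds=300):
    """Step S306: overwrite the cached interface data, then reschedule the
    next refresh, but only if the address is not already pending in the
    second memory (the insert-only-if-absent rule above)."""
    first_memory[addr] = data               # replaces any existing data entry
    if addr not in second_memory:           # already scheduled: keep its time
        second_memory[addr] = time.time() + ttl_seconds

first_memory = {"http://xxx.com/api/xxx1": "old"}
second_memory = {}
update_cache("http://xxx.com/api/xxx1", "new", first_memory, second_memory)
scheduled = second_memory["http://xxx.com/api/xxx1"]
# A second update refreshes the data but keeps the earlier schedule.
update_cache("http://xxx.com/api/xxx1", "newer", first_memory, second_memory)
```

Skipping the insert when an entry already exists prevents a concurrent writer from pushing an interface's refresh deadline further into the future.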
The above is the update policy for cached interface data. In addition, when the server receives an interface request sent by a client, it determines whether the first memory stores interface data corresponding to the requested interface address; if it does, that interface data is returned to the client.
When the first memory does not store interface data corresponding to the requested interface address, the interface has never been requested before: this is the client's first request for it. The server therefore accesses the interface address to obtain the corresponding interface data and returns it to the client. Likewise, after accessing the interface address and acquiring the interface data, the server updates the interface data into the first memory, sets the next update time of the interface address, and stores the address and that time in the second memory in association with each other (both in the same way as in the steps above).
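The request-handling path just described can be sketched as one function (in-memory model; `fetch` stands in for the call to the third-party server, and the names and TTL are illustrative):

```python
import time

def handle_request(addr, first_memory, second_memory, fetch, ttl_seconds=300):
    """Serve an interface request: on a cache hit return the stored data
    without touching the interface; on a miss (the first ever request for
    this address) fetch synchronously, cache the result, and schedule the
    first asynchronous refresh."""
    if addr in first_memory:
        return first_memory[addr]              # hit: no interface access
    data = fetch(addr)                         # first request goes to the source
    first_memory[addr] = data
    # Schedule the first refresh; setdefault mirrors insert-only-if-absent.
    second_memory.setdefault(addr, time.time() + ttl_seconds)
    return data

calls = []
def fake_fetch(addr):
    calls.append(addr)
    return "payload"

fm, sm = {}, {}
first = handle_request("http://xxx.com/api/xxx1", fm, sm, fake_fetch)
second = handle_request("http://xxx.com/api/xxx1", fm, sm, fake_fetch)
```

Only the first call reaches the source; from then on the refresh thread of steps S302 to S306 keeps the cached copy fresh, so clients never wait on the interface again.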
It should be noted that, in the embodiment of the present invention, the execution sequence of the steps S302, S304, and S306 is not limited.
According to the interface data caching scheme of this embodiment, for a given interface address the server accesses the interface directly only on the client's very first request; all subsequent requests for that address are served straight from the cache server. A single service asynchronously updates the cached interface data and sets the next update time at the same moment, which guarantees that a cached copy always exists; the configured next update time determines the refresh frequency. The next update time can be set relatively short, for example 5 to 10 minutes, which effectively shortens the cache time, but this only makes the asynchronous cache refresh run more often and has no effect on the speed at which clients obtain interface data. Overall, interface access performance is improved and the cache time is shortened at the same time, without harming the real-time accuracy of the data.
Fig. 4 shows a block diagram of an interface data caching apparatus 400 according to an embodiment of the present invention, where the apparatus 400 resides in a computing device, for example, in the server 120, and the computing device is connected to a caching server, and the caching server includes a first memory, a second memory, and an interface address queue, where the first memory is adapted to store an association relationship between an interface address and interface data, and the second memory is adapted to store an association relationship between an interface address and a next update time.
Referring to fig. 4, the apparatus 400 includes:
a data synchronization unit 410, adapted to obtain, from the second memory, an interface address whose next update time is less than the current time at a predetermined time interval, insert the obtained interface address into the interface address queue, and delete a data entry corresponding to the obtained interface address from the second memory;
the queue consuming unit 420 is adapted to fetch an interface address from the interface address queue, access the interface address to obtain corresponding interface data, update the interface data obtained by accessing the interface address into the first memory, set a next update time of the interface address, and store the interface address and the next update time in the second memory in an associated manner.
The apparatus 400 may further comprise a request processing unit 430 adapted to:
receiving an interface request sent by a client, and judging whether interface data corresponding to the requested interface address is stored in a first memory;
when the interface data corresponding to the requested interface address is not stored in the first memory, accessing the interface address to acquire the corresponding interface data;
and returning the interface data acquired by accessing the interface address to the client, updating the interface data into the first memory, setting the next updating time of the interface address, and storing the interface address and the next updating time into the second memory in a correlated manner.
The request processing unit 430 is further adapted to: and when the first memory stores the interface data corresponding to the requested interface address, returning the interface data to the client.
The specific processing performed by the data synchronization unit 410 and the queue consumption unit 420 is the same as the steps S302, S304, and S306, and is not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Claims (10)

1. A method for caching interface data, executed in a computing device connected to a cache server, the cache server including a first memory, a second memory and an interface address queue, the first memory being adapted to store an association relationship between an interface address and interface data, the second memory being adapted to store an association relationship between an interface address and a next update time, the method comprising:
according to a preset time interval, acquiring an interface address of which the next updating time is less than the current time from the second memory, inserting the acquired interface address into the interface address queue, and deleting a data entry corresponding to the acquired interface address from the second memory;
taking out an interface address from the interface address queue and accessing the interface address to obtain corresponding interface data; and
and updating the interface data acquired by accessing the interface address into the first memory, setting the next update time of the interface address, and storing the interface address and the next update time into the second memory in a correlated manner.
2. The method of claim 1, further comprising:
receiving an interface request sent by a client, and determining whether interface data corresponding to the requested interface address is stored in the first memory;
when the interface data corresponding to the requested interface address is not stored in the first memory, accessing the interface address to acquire the corresponding interface data; and
returning the interface data acquired by accessing the interface address to the client, updating the interface data into the first memory, setting the next update time of the interface address, and storing the interface address and the next update time in the second memory in association with each other.
3. The method of claim 2, further comprising:
when the first memory stores the interface data corresponding to the requested interface address, returning the interface data to the client.
4. The method according to any one of claims 1 to 3, wherein the cache server employs Redis, and/or the second memory employs an ordered set (Sorted Set).
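The request path of claims 2 and 3 can likewise be sketched with in-memory stand-ins; `fetch_interface`, the store names, and the TTL are assumptions for illustration only:

```python
# Illustrative in-memory stand-ins for the claimed stores.
first_memory = {}    # interface address -> cached interface data
second_memory = {}   # interface address -> next update time

def fetch_interface(address):
    # Hypothetical stand-in for an HTTP call to the interface address.
    return f"data-for-{address}"

def handle_request(address, now, ttl=60):
    """Serve a client request: return cached data on a hit (claim 3);
    on a miss, fetch from the interface, cache the result in the first
    memory, and schedule the next update time in the second memory
    (claim 2)."""
    if address in first_memory:           # claim 3: cache hit
        return first_memory[address]
    data = fetch_interface(address)       # claim 2: cache miss
    first_memory[address] = data
    second_memory[address] = now + ttl
    return data
```

In the Redis embodiment of claim 4, the second memory would presumably be a Sorted Set keyed by interface address with the next update time as the score, so that the background sweep of claim 1 reduces to a range query on scores below the current timestamp (e.g. via ZRANGEBYSCORE).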
5. An apparatus for caching interface data, residing in a computing device, the computing device being connected to a cache server, the cache server comprising a first memory, a second memory and an interface address queue, the first memory being adapted to store an association between an interface address and interface data, the second memory being adapted to store an association between an interface address and a next update time, the apparatus comprising:
a data synchronization unit adapted to acquire, from the second memory at a preset time interval, each interface address whose next update time is earlier than the current time, insert the acquired interface address into the interface address queue, and delete the data entry corresponding to the acquired interface address from the second memory; and
a queue consumption unit adapted to take an interface address out of the interface address queue, access the interface address to acquire the corresponding interface data, update the interface data acquired by accessing the interface address into the first memory, set the next update time of the interface address, and store the interface address and the next update time in the second memory in association with each other.
6. The apparatus of claim 5, further comprising a request processing unit adapted to:
receive an interface request sent by a client, and determine whether interface data corresponding to the requested interface address is stored in the first memory;
when the interface data corresponding to the requested interface address is not stored in the first memory, access the interface address to acquire the corresponding interface data; and
return the interface data acquired by accessing the interface address to the client, update the interface data into the first memory, set the next update time of the interface address, and store the interface address and the next update time in the second memory in association with each other.
7. The apparatus of claim 6, wherein the request processing unit is further adapted to:
when the first memory stores the interface data corresponding to the requested interface address, return the interface data to the client.
8. The apparatus according to any one of claims 5 to 7, wherein the cache server employs Redis, and/or the second memory employs an ordered set (Sorted Set).
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions configured for execution by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1-4.
10. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-4.
CN201911097008.3A 2019-11-11 2019-11-11 Interface data caching method and device and computing equipment Active CN110889053B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911097008.3A CN110889053B (en) 2019-11-11 2019-11-11 Interface data caching method and device and computing equipment

Publications (2)

Publication Number Publication Date
CN110889053A true CN110889053A (en) 2020-03-17
CN110889053B CN110889053B (en) 2022-07-19

Family

ID=69747245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911097008.3A Active CN110889053B (en) 2019-11-11 2019-11-11 Interface data caching method and device and computing equipment

Country Status (1)

Country Link
CN (1) CN110889053B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113254519A (en) * 2021-05-28 2021-08-13 北京奇岱松科技有限公司 Access method, device, equipment and storage medium of multi-source heterogeneous database

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090125678A1 (en) * 2007-11-09 2009-05-14 Seisuke Tokuda Method for reading data with storage system, data managing system for storage system and storage system
CN103645904A (en) * 2013-12-20 2014-03-19 北京京东尚科信息技术有限公司 Cache realization method of interface calling
CN104657401A (en) * 2014-10-21 2015-05-27 北京齐尔布莱特科技有限公司 Web cache updating method
CN106815329A (en) * 2016-12-29 2017-06-09 网易无尾熊(杭州)科技有限公司 A kind of data cached update method and device
CN106843769A (en) * 2017-01-23 2017-06-13 北京齐尔布莱特科技有限公司 A kind of interface data caching method, device and computing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant