CN114281855A - Data request method, data request device, computer equipment, storage medium and program product - Google Patents


Info

Publication number
CN114281855A
CN114281855A (Application CN202111609540.6A)
Authority
CN
China
Prior art keywords
data
cache database
target data
level cache
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111609540.6A
Other languages
Chinese (zh)
Inventor
林克全
王博
夏伟
张文瀚
王怀
张超
赵飞
陈诚
邓兵
李方宇
杨毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Southern Power Grid Digital Grid Technology Guangdong Co ltd
Original Assignee
China Southern Power Grid Co Ltd
Southern Power Grid Digital Grid Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Southern Power Grid Co Ltd, Southern Power Grid Digital Grid Research Institute Co Ltd filed Critical China Southern Power Grid Co Ltd
Priority to CN202111609540.6A priority Critical patent/CN114281855A/en
Publication of CN114281855A publication Critical patent/CN114281855A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a data request method, a data request apparatus, a computer device, a storage medium, and a computer program product, in the technical field of data processing. The method comprises: acquiring a data identifier of the data to be requested during a data request; searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier; and if the target data does not exist in the second-level cache database, performing data slicing on global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display it. The method eliminates the manual preprocessing required by manual slicing, greatly reduces operation-and-maintenance burden and storage requirements, and stores the generated slices so that hot-spot data is always obtained from the cache, greatly improving the response speed and concurrency capability of the interface.

Description

Data request method, data request device, computer equipment, storage medium and program product
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data request method, an apparatus, a computer device, a storage medium, and a computer program product.
Background
Segmenting huge data volumes to obtain data slices is a basic and common operation for data mining and data analysis in the field of big data technology.
In the prior art, a slicing algorithm is generally called to generate a large number of data slices in advance, the slices are stored as static resources according to a certain folder organizational structure, and requests are then resolved to these resources for front-end rendering and display.
However, this method requires a large amount of time to preprocess the slice data, increases the workload, and places high demands on storage space.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a data request method, apparatus, computer device, storage medium, and computer program product capable of improving slice response efficiency, so as to address the above technical problems.
In a first aspect, the present application provides a data request method, including:
acquiring a data identifier of data to be requested;
searching target data from a first-level cache database according to the data identification; if the target data does not exist in the first-level cache database, searching the target data from the second-level cache database according to the data identification;
and if the target data does not exist in the secondary cache database, performing data slicing on the global data according to the data identification to obtain target data, and sending the target data to the terminal so that the terminal can display the target data.
In one embodiment, the method further comprises:
and if the target data is obtained by data slicing of the global data, storing the target data into the first-level cache database and the second-level cache database.
In one embodiment, storing the target data into the first-level cache database and the second-level cache database includes:
and after storing the target data into the secondary cache database, storing the target data into the primary cache database.
In one embodiment, storing the target data into the first-level cache database and the second-level cache database includes:
detecting whether the storage capacity of the first-level cache database reaches the upper limit of the capacity;
under the condition that the storage capacity of the first-level cache database reaches the upper limit of the capacity, screening data of the first-level cache database;
and storing the target data into the second-level cache database and the first-level cache database after the data is cleared.
In one embodiment, the data screening of the primary cache database when the storage capacity of the primary cache database reaches the upper limit of the capacity includes:
and under the condition that the storage capacity of the primary cache database reaches the upper limit of the capacity, screening the data of the primary cache database based on an LRU algorithm.
In one embodiment, the method further comprises:
under the condition that global data is updated, acquiring an update message, wherein the update message comprises data content and a data identifier;
and traversing the primary cache database and the secondary cache database according to the updating message.
In a second aspect, the present application further provides a data request apparatus, including:
the acquisition module is used for acquiring a data identifier of data to be requested;
the searching module is used for searching target data from the first-level cache database according to the data identification; if the target data does not exist in the first-level cache database, searching the target data from the second-level cache database according to the data identification;
and the slicing module is used for carrying out data slicing on the global data according to the data identification to obtain target data if the target data does not exist in the secondary cache database, and sending the target data to the terminal so that the terminal can display the target data.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory and a processor, the memory stores a computer program, and the processor realizes the following steps when executing the computer program:
acquiring a data identifier of data to be requested;
searching target data from a first-level cache database according to the data identification; if the target data does not exist in the first-level cache database, searching the target data from the second-level cache database according to the data identification;
and if the target data does not exist in the secondary cache database, performing data slicing on the global data according to the data identification to obtain target data, and sending the target data to the terminal so that the terminal can display the target data.
In a fourth aspect, the present application further provides a computer-readable storage medium. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of:
acquiring a data identifier of data to be requested;
searching target data from a first-level cache database according to the data identification; if the target data does not exist in the first-level cache database, searching the target data from the second-level cache database according to the data identification;
and if the target data does not exist in the secondary cache database, performing data slicing on the global data according to the data identification to obtain target data, and sending the target data to the terminal so that the terminal can display the target data.
In a fifth aspect, the present application further provides a computer program product. Computer program product comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a data identifier of data to be requested;
searching target data from a first-level cache database according to the data identification; if the target data does not exist in the first-level cache database, searching the target data from the second-level cache database according to the data identification;
and if the target data does not exist in the secondary cache database, performing data slicing on the global data according to the data identification to obtain target data, and sending the target data to the terminal so that the terminal can display the target data.
The data request method, data request apparatus, computer device, storage medium, and computer program product can improve data request efficiency. The method comprises: acquiring a data identifier of the data to be requested during a data request; searching for target data in the first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in the second-level cache database according to the data identifier; and if the target data does not exist in the second-level cache database, performing data slicing on the global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display it. The method eliminates the manual preprocessing required by manual slicing, greatly reduces operation-and-maintenance burden and storage requirements, and stores the generated slices so that hot-spot data is always obtained from the cache, greatly improving the response speed and concurrency capability of the interface.
Drawings
FIG. 1 is a flow diagram illustrating a method for requesting data in one embodiment;
FIG. 2 is a flow diagram illustrating steps for storing target data in a primary cache database and a secondary cache database in one embodiment;
FIG. 3 is a flow chart illustrating a data request method according to another embodiment;
FIG. 4 is a block diagram of a data request device in one embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Segmenting huge data volumes to obtain data slices is a basic and common operation for data mining and data analysis in the field of big data technology.
Common slicing methods include the grid slicing method and the vector slicing method, and the generation of vector slices is divided into a manual slicing mode and an automatic slicing mode. In the manual slicing mode, slices are pre-generated before the calling party uses them, stored as static resources according to a certain folder organizational structure, and requests are resolved to these resources for front-end rendering and display. Although the manual slicing mode is quick and convenient to use, a large amount of time is needed to preprocess the slice data, including stock slices and incremental slices, which brings a great manual workload and storage-space demand. In the automatic slicing mode, slicing is performed only when a client requests sliced data, but this brings a problem of interface performance: the slicing time for a single request is too long, which greatly degrades the user experience, and under high concurrency the rapidly growing server load also limits the high-concurrency performance of the architecture.
The data request method provided by the application combines the advantages of the manual and automatic slicing modes. By adopting a two-dimensional power-grid dynamic vector slicing method based on a hot-spot caching mechanism, the manual preprocessing required by manual slicing is eliminated, greatly reducing operation-and-maintenance burden and storage requirements. Meanwhile, a redis hot-spot data caching mechanism and the fastdfs distributed caching technology are adopted to store the generated slices, and an automatic slicing mechanism dynamically cuts the slice data requested for the first time and caches it or updates the cache, so that hot-spot data is always obtained from the cache, greatly improving the response speed and concurrency capability of the interface.
In an embodiment, as shown in fig. 1, a data request method is provided, and this embodiment is illustrated by applying the method to a terminal, and it is to be understood that the method may also be applied to a server, and may also be applied to a system including the terminal and the server, and is implemented by interaction between the terminal and the server. In this embodiment, the method includes the steps of:
step 101, obtaining a data identifier of data to be requested.
In the embodiment of the application, a user can input a data request instruction through an interactive interface, and a terminal obtains a data identifier of data to be requested by analyzing the data request instruction.
Optionally, the data identifier of the data to be requested is a unique identifier of the data to be requested, and may be, for example, a number, a code, or the like.
102, searching target data from a first-level cache database according to the data identification; and if the target data does not exist in the first-level cache database, searching the target data from the second-level cache database according to the data identifier.
In the embodiment of the application, sliced data are stored in both the first-level cache database and the second-level cache database, and when a data request is made, the sliced data are searched in the first-level cache database and the second-level cache database. Optionally, in this embodiment of the present application, the first-level cache database may run in a first server, and the second-level cache database may run in a plurality of distributed second servers, where the first-level cache database performs data maintenance by using a redis hot spot data cache mechanism. And the secondary cache database maintains data by adopting a fastdfs distributed cache technology.
The data stored in the first-level cache database is hot spot data, and the data stored in the second-level cache database is all sliced data. The data stored in the first-level cache database must exist in the second-level cache database, but the data stored in the second-level cache database is not necessarily in the first-level cache database.
In the embodiment of the application, target data is searched in a first-level cache database, and the target data is data corresponding to a data identifier. And if the target data is found in the first-level cache database, sending the target data to the terminal so that the terminal can display the target data. And if the target data is not searched in the first-level cache database, searching in the second-level cache database.
And if the target data is found in the secondary cache database, sending the target data to the terminal so that the terminal can display the target data. And if the target data is not found in the secondary cache database, the target data corresponding to the data identifier is not sliced.
And 103, if the target data does not exist in the secondary cache database, performing data slicing on the global data according to the data identification to obtain target data, and sending the target data to the terminal so that the terminal can display the target data.
In the embodiment of the application, when the target data corresponding to the data identifier has not yet been sliced, neither the first-level nor the second-level cache database stores the target data, so the global data is sliced according to the data identifier to obtain the target data. The global data may refer to data in a GIS (Geographic Information System), but may also refer to any raw data. The method of slicing the global data may be, for example, a vector slicing method or a grid slicing method.
With the data request method, a data identifier of the data to be requested is acquired during the data request; target data is searched for in the first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, it is searched for in the second-level cache database according to the data identifier; and if the target data does not exist in the second-level cache database, data slicing is performed on the global data according to the data identifier to obtain the target data, which is sent to the terminal for display. The method eliminates the manual preprocessing required by manual slicing, greatly reduces operation-and-maintenance burden and storage requirements, and stores the generated slices so that hot-spot data is always obtained from the cache, greatly improving the response speed and concurrency capability of the interface.
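The read path described in steps 101 to 103 can be sketched as follows. This is a minimal illustration, not the patented implementation: the cache backends (redis for the first level, fastdfs for the second level) are modeled as plain dicts, and `slice_global_data` is a hypothetical stand-in for the actual vector or grid slicing algorithm.

```python
level1_cache = {}   # hot-spot slices (redis in the patent's scheme)
level2_cache = {}   # all generated slices (fastdfs in the patent's scheme)
global_data = {"tile:1:0:0": "raw geometry for tile 1/0/0"}

def slice_global_data(data_id):
    """Hypothetical slicing step: cut the requested slice out of the
    global data set. Returns None if the identifier is unknown."""
    return global_data.get(data_id)

def request_data(data_id):
    # Step 102: look in the first-level cache, then the second-level cache.
    target = level1_cache.get(data_id)
    if target is not None:
        return target
    target = level2_cache.get(data_id)
    if target is not None:
        return target
    # Step 103: miss on both levels -> slice the global data, then store
    # the result, second-level cache first, then first-level cache.
    target = slice_global_data(data_id)
    if target is not None:
        level2_cache[data_id] = target
        level1_cache[data_id] = target
    return target
```

The first request for a slice falls through to `slice_global_data`; every later request for the same identifier is served from the first-level cache.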
In an embodiment of the present application, in order to improve the first access efficiency, global data or data of interest may be pre-sliced, and then the pre-sliced result is stored in the first-level cache database and the second-level cache database, so that the first request efficiency of data may be improved by pre-slicing data.
In an embodiment of the present application, if the target data is obtained by data slicing of the global data, the target data is stored in the first-level cache database and the second-level cache database.
If the target data is obtained by data slicing of the global data, it indicates that the target data does not exist in the first-level cache database and the second-level cache database at the current moment, and in this case, the target data may be stored in the first-level cache database and the second-level cache database.
If the target data is searched from the first-level cache database or the second-level cache database, the target data is indicated to be already present in the first-level cache database or the second-level cache database, and therefore the target data does not need to be stored again.
The embodiment of the application ensures that the cache service is balanced in performance and memory consumption through two layers of caches.
Optionally, storing the target data into the first-level and second-level cache databases may proceed as follows: the target data is stored into the first-level cache database, and when the first-level cache database is full, the surplus data in the first-level cache database is moved into the second-level cache database in chronological order before the current target data is stored into the first-level cache database. Data slices removed from the first-level cache database are thus preserved in the second-level cache database, and the first-level cache database keeps the most recently sliced data.
Optionally, in this embodiment of the application, storing the target data in the first-level cache database and the second-level cache database may refer to storing the target data in the first-level cache database and the second-level cache database at the same time without any sequence, that is, storing the target data in both the first-level cache database and the second-level cache database.
It should be noted that the first-level cache database has an upper capacity limit, whereas the storage space of the second-level cache database can in theory be made unlimited by adding hard disks and the like. The second-level cache database is stored on disk using fastdfs, which ensures the persistence of the cache; meanwhile, the non-hot-spot cache is kept on lower-cost hard-disk space, reducing cost.
Optionally, in this embodiment of the application, the target data is stored in the second-level cache database, and then the target data is stored in the first-level cache database.
All read operations need to access the first-level cache database first and then the second-level cache database; all write operations require operating the secondary cache database first and then the primary cache database.
Optionally, in an embodiment, as shown in fig. 2, storing the target data into the first-level cache database and the second-level cache database includes:
step 201, detecting whether the storage capacity of the primary cache database has reached the upper limit of the capacity.
In the embodiment of the application, before storing the target data in the first-level cache database, whether the storage capacity of the first-level cache database reaches the upper limit of the capacity may be detected, and if the storage capacity reaches the upper limit of the capacity, it indicates that the first-level cache database is full.
And 202, screening the data of the primary cache database under the condition that the storage capacity of the primary cache database reaches the upper limit of the capacity. And storing the target data into the second-level cache database and the first-level cache database after the data is cleared.
In the embodiment of the application, under the condition that the storage capacity of the first-level cache database reaches the upper limit of the capacity, data screening needs to be performed on the first-level cache database, and the screened data is removed from the first-level cache database, so that the first-level cache database has a new storage space for storing target data.
Optionally, in this embodiment of the present application, the process of performing data screening on the first-level cache database may further include the following steps: the prior data may be deleted from the level one cache database in chronological order. Or for the data in the first-level cache database, counting the data heat, sorting according to the data heat, deleting the data sorted at the tail from the first-level cache database, and then storing the target data into the first-level cache database.
The method for counting the heat of the data may be, for example: and determining the data heat degree by counting the access times of the data within the preset time length.
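The heat-counting approach above can be sketched as a sliding-window access counter. The 60-second window and the helper names are illustrative assumptions, not specified by the application.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60  # illustrative preset time length
access_log = defaultdict(deque)   # data_id -> timestamps of recent accesses

def record_access(data_id, now=None):
    """Record one access to the given data item."""
    access_log[data_id].append(now if now is not None else time.time())

def heat(data_id, now=None):
    """Data heat = number of accesses within the preset window."""
    now = now if now is not None else time.time()
    log = access_log[data_id]
    # Discard accesses that have fallen out of the window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    return len(log)
```

Sorting cached items by this heat value and deleting those at the tail of the ranking yields the screening behavior described above.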
Optionally, in this embodiment of the present application, the process of performing data screening on the first-level cache database may further include the following step: under the condition that the storage capacity of the first-level cache database reaches the upper limit of the capacity, screening the first-level cache database based on an LRU (Least Recently Used) algorithm.
The LRU elimination mechanism works as follows: when a user accesses a piece of data, the linked list is traversed first; when the node holding that data is reached, the node is deleted from its original position and inserted at the head of the linked list. If the accessed data was not previously cached, a new node is inserted directly at the head. If the cache is full, nodes are deleted from the tail to make room for the data to be inserted.
In the embodiment of the application, target data are stored in the memory by utilizing redis, and the non-hot cache is eliminated by an LRU mechanism, so that the memory is fully utilized.
On the basis of the foregoing embodiments, an embodiment of the present application further provides a data request method, as shown in fig. 3, the method includes the following steps:
step 301, acquiring an update message when the global data is updated.
In the embodiment of the application, the global data is often updated, and after the global data is updated, an update message can be generated according to the updated data, wherein the update message includes data content and data identification of the updated data.
Step 302, traversing the primary cache database and the secondary cache database according to the update message.
In the embodiment of the application, the update message includes the data content and the data identifier of the data that is updated, corresponding data can be found in the first-level cache database and the second-level cache database according to the data identifier, and then the data content before updating is replaced with the data content of the updated data.
In the embodiment of the application, when updating, the first-level cache database may be traversed first, and then the second-level cache database may be traversed.
In an optional implementation manner, when global data is updated, it may be checked whether updated data exists in the second-level cache database first, and if not, it is checked whether updated data exists in the first-level cache database, and if so, the data in the second-level cache database is updated based on an update message. And checking whether the first-level cache database has updated data, and if not, ending the updating. And if so, updating the data in the primary cache database based on the updating message, and then finishing the updating.
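The update flow in steps 301 and 302 can be sketched as below. This is an illustrative reduction: the two cache levels are modeled as dicts, and the update message is assumed to be a mapping with `data_id` and `content` keys, a format the application does not prescribe.

```python
level1_cache = {"slice:7": "old content"}
level2_cache = {"slice:7": "old content", "slice:8": "other slice"}

def apply_update(update_message):
    """Apply an update message (data identifier + new content) to both
    cache levels, writing the second-level cache before the first-level
    cache, matching the write order used for newly generated slices."""
    data_id = update_message["data_id"]
    content = update_message["content"]
    if data_id in level2_cache:
        level2_cache[data_id] = content
    if data_id in level1_cache:
        level1_cache[data_id] = content

apply_update({"data_id": "slice:7", "content": "new content"})
```

After the call, the stale slice is replaced at both levels while unrelated slices are untouched, so subsequent cache hits always return the latest data.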
In the embodiment of the application, the slice data in the first-level cache database and the second-level cache database are updated, so that the slice data are always the latest data, and the inconsistency of the data is avoided.
The data request method provided by the embodiment of the application uses multiple technical schemes such as an LRU elimination mechanism, a redis primary cache and a fastdfs secondary cache, and controls pre-cutting and updating of data in a slicing process, so that response efficiency and request concurrency of slicing requests are greatly improved.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, there is no strict ordering restriction on the execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is likewise not necessarily sequential, and they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides a data request apparatus for implementing the above-mentioned data request method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the data request device provided below can be referred to the limitations of the data request method in the foregoing, and details are not described here.
In one embodiment, as shown in fig. 4, there is provided a data request apparatus including: an obtaining module 401, a searching module 402 and a slicing module 403, wherein:
an obtaining module 401, configured to obtain a data identifier of data to be requested;
a searching module 402, configured to search target data from the primary cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching the target data from the second-level cache database according to the data identification;
and a slicing module 403, configured to, if target data does not exist in the secondary cache database, perform data slicing on the global data according to the data identifier to obtain target data, and send the target data to the terminal, so that the terminal displays the target data.
In one embodiment, the slicing module 403 is specifically configured to store the target data into the first-level cache database and the second-level cache database if the target data is obtained by data slicing of the global data.
In one embodiment, the slicing module 403 is specifically configured to store the target data into the primary cache database after storing the target data into the secondary cache database.
In one embodiment, the slicing module 403 is specifically configured to detect whether the storage capacity of the primary cache database has reached an upper capacity limit;
under the condition that the storage capacity of the first-level cache database reaches the upper limit of the capacity, screening data of the first-level cache database;
and storing the target data into the second-level cache database and the first-level cache database after the data is cleared.
In one embodiment, the slicing module 403 is specifically configured to perform data screening on the primary cache database based on an LRU algorithm when the storage capacity of the primary cache database reaches the upper limit of the storage capacity.
In one embodiment, the obtaining module 401 is specifically configured to obtain an update message when the global data is updated, where the update message includes data content and a data identifier;
and traversing the primary cache database and the secondary cache database according to the updating message.
The respective modules in the data request device described above may be implemented in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 5. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a data request method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 5 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program. The processor implements the following steps when executing the computer program:
acquiring a data identifier of data to be requested;
searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier;
and if the target data does not exist in the second-level cache database, performing data slicing on the global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
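The two-level lookup described in the steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation: the in-memory dicts standing in for the first-level and second-level cache databases, the `GLOBAL_DATA` store, and the `slice_global_data` helper are all names invented for the example.

```python
from typing import Any, Optional

# In-memory stand-ins for the two cache layers and the global data store.
# In a real deployment the first level might be a local in-process cache and
# the second level a shared store such as Redis; these names are assumptions.
l1_cache: dict = {}
l2_cache: dict = {}
GLOBAL_DATA = {"sensor-42": list(range(10))}  # hypothetical global data set

def slice_global_data(data_id: str) -> Optional[Any]:
    """Slice the target data out of the global data by its identifier."""
    return GLOBAL_DATA.get(data_id)

def request_data(data_id: str) -> Optional[Any]:
    # Step 1: search the first-level cache.
    target = l1_cache.get(data_id)
    if target is not None:
        return target
    # Step 2: on an L1 miss, search the second-level cache.
    target = l2_cache.get(data_id)
    if target is not None:
        l1_cache[data_id] = target  # promote hot data into the first level
        return target
    # Step 3: on a miss in both, slice the global data, then store the slice
    # into the second-level cache first and the first-level cache afterwards,
    # so later requests hit the cache instead of re-slicing.
    target = slice_global_data(data_id)
    if target is not None:
        l2_cache[data_id] = target
        l1_cache[data_id] = target
    return target
```

A subsequent request for the same identifier is then served from the first-level cache without touching the global data, which is what lets hot-spot data always be obtained from the cache.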
In one embodiment, the processor, when executing the computer program, further performs the following step:
if the target data is obtained by slicing the global data, storing the target data into the first-level cache database and the second-level cache database.
In one embodiment, the processor, when executing the computer program, further performs the following step:
storing the target data into the first-level cache database after storing the target data into the second-level cache database.
In one embodiment, the processor, when executing the computer program, further performs the following steps:
detecting whether the storage capacity of the first-level cache database has reached its upper limit;
if the storage capacity of the first-level cache database has reached its upper limit, performing data screening on the first-level cache database;
and storing the target data into the second-level cache database, and into the first-level cache database after the screened-out data has been removed.
In one embodiment, the processor, when executing the computer program, further performs the following step:
if the storage capacity of the first-level cache database has reached its upper limit, screening the data of the first-level cache database based on an LRU (least recently used) algorithm.
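The capacity check and LRU-based screening in the embodiments above can be sketched as follows. The `OrderedDict`-backed cache and the capacity of 3 entries are illustrative assumptions made for the demo, not values from the disclosure.

```python
from collections import OrderedDict

L1_CAPACITY = 3          # arbitrary upper limit chosen for the illustration
l1_cache = OrderedDict() # insertion order doubles as a recency order

def l1_put(data_id, value):
    """Store a slice in the first-level cache, screening by LRU when full."""
    if data_id in l1_cache:
        l1_cache.move_to_end(data_id)        # refresh recency on rewrite
    l1_cache[data_id] = value
    # If the storage capacity has reached the upper limit, screen out the
    # least recently used entries before keeping the new one.
    while len(l1_cache) > L1_CAPACITY:
        l1_cache.popitem(last=False)         # evict the LRU entry

def l1_get(data_id):
    """Read a slice, marking it as recently used on a hit."""
    if data_id in l1_cache:
        l1_cache.move_to_end(data_id)
        return l1_cache[data_id]
    return None
```

Using an ordered map this way keeps both the capacity check and the eviction at O(1) per operation, which is the usual reason LRU is chosen for an in-process first-level cache.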
In one embodiment, the processor, when executing the computer program, further performs the following steps:
when the global data is updated, acquiring an update message, the update message comprising data content and a data identifier;
and traversing the first-level cache database and the second-level cache database according to the update message.
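The update-message handling above can be sketched as follows. The message shape (a mapping with `data_id` and `content` keys) and the dict-backed cache layers are assumptions made for the illustration; the disclosure only specifies that the message carries a data identifier and data content.

```python
# Both cache layers hold a stale copy of the slice identified by "k".
l1_cache = {"k": "old"}
l2_cache = {"k": "old"}

def on_global_data_updated(update_message: dict) -> None:
    data_id = update_message["data_id"]
    content = update_message["content"]
    # Traverse both cache layers and refresh any cached copy of this slice,
    # so stale data is not served after the global data changes.
    for cache in (l1_cache, l2_cache):
        if data_id in cache:
            cache[data_id] = content

on_global_data_updated({"data_id": "k", "content": "new"})
```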
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon. The computer program, when executed by a processor, implements the following steps:
acquiring a data identifier of data to be requested;
searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier;
and if the target data does not exist in the second-level cache database, performing data slicing on the global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
In one embodiment, the computer program, when executed by the processor, further implements the following step:
if the target data is obtained by slicing the global data, storing the target data into the first-level cache database and the second-level cache database.
In one embodiment, the computer program, when executed by the processor, further implements the following step:
storing the target data into the first-level cache database after storing the target data into the second-level cache database.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
detecting whether the storage capacity of the first-level cache database has reached its upper limit;
if the storage capacity of the first-level cache database has reached its upper limit, performing data screening on the first-level cache database;
and storing the target data into the second-level cache database, and into the first-level cache database after the screened-out data has been removed.
In one embodiment, the computer program, when executed by the processor, further implements the following step:
if the storage capacity of the first-level cache database has reached its upper limit, screening the data of the first-level cache database based on an LRU (least recently used) algorithm.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
when the global data is updated, acquiring an update message, the update message comprising data content and a data identifier;
and traversing the first-level cache database and the second-level cache database according to the update message.
In one embodiment, a computer program product is provided, comprising a computer program. The computer program, when executed by a processor, implements the following steps:
acquiring a data identifier of data to be requested;
searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier;
and if the target data does not exist in the second-level cache database, performing data slicing on the global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
In one embodiment, the computer program, when executed by the processor, further implements the following step:
if the target data is obtained by slicing the global data, storing the target data into the first-level cache database and the second-level cache database.
In one embodiment, the computer program, when executed by the processor, further implements the following step:
storing the target data into the first-level cache database after storing the target data into the second-level cache database.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
detecting whether the storage capacity of the first-level cache database has reached its upper limit;
if the storage capacity of the first-level cache database has reached its upper limit, performing data screening on the first-level cache database;
and storing the target data into the second-level cache database, and into the first-level cache database after the screened-out data has been removed.
In one embodiment, the computer program, when executed by the processor, further implements the following step:
if the storage capacity of the first-level cache database has reached its upper limit, screening the data of the first-level cache database based on an LRU (least recently used) algorithm.
In one embodiment, the computer program, when executed by the processor, further implements the following steps:
when the global data is updated, acquiring an update message, the update message comprising data content and a data identifier;
and traversing the first-level cache database and the second-level cache database according to the update message.
It should be noted that the user information (including but not limited to user device information, user personal information, and the like) and data (including but not limited to data used for analysis, stored data, displayed data, and the like) referred to in the present application are information and data authorized by the user or fully authorized by all parties concerned.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and while their descriptions are relatively specific and detailed, they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A data request method, the method comprising:
acquiring a data identifier of data to be requested;
searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier;
and if the target data does not exist in the second-level cache database, performing data slicing on global data according to the data identifier to obtain the target data, and sending the target data to a terminal so that the terminal can display the target data.
2. The method of claim 1, further comprising:
and if the target data is obtained by data slicing of the global data, storing the target data into the first-level cache database and the second-level cache database.
3. The method of claim 2, wherein storing the target data into the first-level cache database and the second-level cache database comprises:
storing the target data into the first-level cache database after storing the target data into the second-level cache database.
4. The method of claim 2, wherein storing the target data into the first-level cache database and the second-level cache database comprises:
detecting whether the storage capacity of the first-level cache database has reached its upper limit;
if the storage capacity of the first-level cache database has reached its upper limit, performing data screening on the first-level cache database;
and storing the target data into the second-level cache database, and into the first-level cache database after the screened-out data has been removed.
5. The method of claim 4, wherein performing data screening on the first-level cache database in the case that the storage capacity of the first-level cache database has reached its upper limit comprises:
screening the data of the first-level cache database based on an LRU algorithm in the case that the storage capacity of the first-level cache database has reached its upper limit.
6. The method of claim 1, further comprising:
acquiring an update message when the global data is updated, wherein the update message comprises data content and a data identifier;
and traversing the first-level cache database and the second-level cache database according to the update message.
7. A data request device, the device comprising:
an obtaining module, configured to acquire a data identifier of data to be requested;
a searching module, configured to search for target data in a first-level cache database according to the data identifier, and, if the target data does not exist in the first-level cache database, to search for the target data in a second-level cache database according to the data identifier;
and a slicing module, configured to perform data slicing on global data according to the data identifier to obtain the target data if the target data does not exist in the second-level cache database, and to send the target data to a terminal so that the terminal can display the target data.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN202111609540.6A 2021-12-27 2021-12-27 Data request method, data request device, computer equipment, storage medium and program product Pending CN114281855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111609540.6A CN114281855A (en) 2021-12-27 2021-12-27 Data request method, data request device, computer equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN114281855A 2022-04-05

Family

ID=80875899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111609540.6A Pending CN114281855A (en) 2021-12-27 2021-12-27 Data request method, data request device, computer equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN114281855A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115033608A (en) * 2022-08-12 2022-09-09 广东采日能源科技有限公司 Energy storage system information grading processing method and system
CN115033608B (en) * 2022-08-12 2022-11-04 广东采日能源科技有限公司 Energy storage system information grading processing method and system
CN117785949A (en) * 2024-02-28 2024-03-29 云南省地矿测绘院有限公司 Data caching method, electronic equipment, storage medium and device
CN117785949B (en) * 2024-02-28 2024-05-10 云南省地矿测绘院有限公司 Data caching method, electronic equipment, storage medium and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 510000 No. 11 Kexiang Road, Science City, Luogang District, Guangzhou City, Guangdong Province
Applicants after: CHINA SOUTHERN POWER GRID Co.,Ltd.; Southern Power Grid Digital Grid Research Institute Co.,Ltd.
Country or region after: China
TA01 Transfer of patent application right
Effective date of registration: 20240314
Address after: Full Floor 14, Unit 3, Building 2, No. 11, Middle Spectra Road, Huangpu District, Guangzhou, Guangdong 510700
Applicant after: China Southern Power Grid Digital Grid Technology (Guangdong) Co.,Ltd.
Country or region after: China
Address before: 510000 No. 11 Kexiang Road, Science City, Luogang District, Guangzhou City, Guangdong Province
Applicants before: CHINA SOUTHERN POWER GRID Co.,Ltd.; Southern Power Grid Digital Grid Research Institute Co.,Ltd.
Country or region before: China