CN110737857A - back-end paging acceleration method, system, terminal and storage medium - Google Patents

back-end paging acceleration method, system, terminal and storage medium Download PDF

Info

Publication number
CN110737857A
CN110737857A (Application No. CN201910859559.2A)
Authority
CN
China
Prior art keywords
data
cache
range
request
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910859559.2A
Other languages
Chinese (zh)
Inventor
靳国锋
张建刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Wave Intelligent Technology Co Ltd
Original Assignee
Suzhou Wave Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Wave Intelligent Technology Co Ltd filed Critical Suzhou Wave Intelligent Technology Co Ltd
Priority to CN201910859559.2A priority Critical patent/CN110737857A/en
Publication of CN110737857A publication Critical patent/CN110737857A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/957Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/958Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking

Abstract

The invention provides a back-end paging acceleration method, system, terminal and storage medium. The method comprises the steps of: setting a cache range; requesting pre-cache data from a server according to a query request and the cache range; caching the pre-cache data locally; and loading the data requested to be queried from the locally cached data.

Description

back-end paging acceleration method, system, terminal and storage medium
Technical Field
The invention relates to the technical field of servers, and in particular to a back-end paging acceleration method, system, terminal and storage medium.
Background
When a user queries a page table whose data volume is very large, a back-end (server-side) paging method is generally adopted. For example, when a user queries consumption details, roughly ten thousand records accumulated over about five years may be stored in the database, while a browser page can only display 20 records at a time; to view data beyond the first page, the user must page through it, so paging technology is used. Because the data volume is too large, the server cannot transmit all ten thousand records to the client at once (the data is large, the traffic is heavy, and the transfer takes a long time), so server-side paging is used: each time the user clicks a page at the client, a request is transmitted to the server, and the server loads that page's data to the client for display. The drawback of server-side paging is that every page click requires a full round trip: the request must be sent to the server, and the server must query the data and load it back to the client before the page can be displayed. These repeated requests waste server bandwidth and slow the response, so the user experience is poor.
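The plain server-side paging described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the names `fetch_page` and `PAGE_SIZE` are assumptions chosen for the example.

```python
PAGE_SIZE = 20  # records displayed per browser page, as in the example above

def fetch_page(records, page):
    """Return one page of records, as the server would for each page click.

    In plain server-side paging, every page turn repeats a request like
    this, causing a round trip to the server each time.
    """
    start = (page - 1) * PAGE_SIZE
    return records[start:start + PAGE_SIZE]

records = list(range(10_000))        # e.g. ~10,000 consumption-detail records
first_page = fetch_page(records, 1)  # each click triggers another such request
```

The point of the sketch is the cost model: with ten thousand records and 20 per page, browsing deep into the table means hundreds of separate server round trips.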
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, the present invention provides a back-end paging acceleration method, system, terminal and storage medium, so as to solve the above-mentioned technical problems.
In a first aspect, the invention provides a back-end paging acceleration method, comprising:
setting a cache range;
requesting pre-cache data from a server according to the query request and the cache range, and caching the pre-cache data locally;
and loading the data requested to be queried from the locally cached data.
Further, the setting of the cache range includes:
reading the data amount that needs to be read in the query request;
setting the cache range to the next batch of data following the read data, wherein the data amount of the next batch equals the read data amount.
Further, the requesting the server to obtain the pre-cached data according to the query request and the cache range includes:
if the locally cached data does not satisfy the query request, requesting the server to obtain the requested data together with the data in the cache range of the requested data;
and if the currently locally cached data falls short of the cache range of the current query request, requesting the server to obtain the pre-cached data.
In a second aspect, the present invention provides a back-end paging acceleration system, comprising:
a range setting unit, configured to set a cache range;
a data acquisition unit, configured to request the server to obtain pre-cached data according to the query request and the cache range, and cache the pre-cached data locally;
and a data loading unit, configured to load the data requested to be queried from the locally cached data.
Further, the range setting unit includes:
a request reading module, configured to read the data amount that needs to be read in the query request;
and a range setting module, configured to set the cache range to the next batch of data following the read data, wherein the data amount of the next batch equals the read data amount.
Further, the data obtaining unit includes:
a first acquisition module, configured to request the server to obtain the requested data and the data in the cache range of the requested data if the locally cached data does not satisfy the query request;
and a cache supplement module, configured to request the server to obtain the pre-cached data if the currently locally cached data falls short of the cache range of the current query request.
In a third aspect, a terminal is provided, comprising:
a processor and a memory, wherein
the memory is used for storing a computer program, and
the processor is used for calling and running the computer program from the memory, so that the terminal executes the method of the first aspect.
In a fourth aspect, a computer storage medium is provided, having instructions stored thereon that, when executed on a computer, cause the computer to perform the method of the above aspects.
The beneficial effect of the invention is as follows:
according to the back-end paging acceleration method, system, terminal and storage medium, a module combining client-side cache paging is added on the basis of server-side paging. When the user requests data for the first time, besides loading the data the user requires, an additional portion of data is loaded to the client and placed into the client-side cache; subsequent requests then obtain the data from the client-side cache, which reduces the number of requests sent to the server and thereby improves resource utilization and client-side responsiveness.
In addition, the invention has a reliable design principle, a simple structure, and very wide application prospects.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present invention, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a schematic flow diagram of a method according to an embodiment of the invention.
Fig. 2 is a schematic block diagram of a system according to an embodiment of the invention.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some embodiments of the present invention, rather than all of them.
FIG. 1 is a schematic flow diagram of a method according to an embodiment of the present invention, wherein the execution subject of FIG. 1 may be a back-end paging acceleration system.
As shown in fig. 1, the method 100 includes:
step 110, setting a cache range;
step 120, requesting the server to obtain pre-cached data according to the query request and the cache range, and caching the pre-cached data locally;
step 130, loading the data requested to be queried from the locally cached data.
Optionally, as an embodiment of the present invention, the setting of the cache range includes:
reading the data amount that needs to be read in the query request;
setting the cache range to the next batch of data following the read data, wherein the data amount of the next batch equals the read data amount.
Optionally, as an embodiment of the present invention, the requesting the server to obtain the pre-cached data according to the query request and the cache range includes:
if the locally cached data does not satisfy the query request, requesting the server to obtain the requested data together with the data in the cache range of the requested data;
and if the currently locally cached data falls short of the cache range of the current query request, requesting the server to obtain the pre-cached data.
For ease of understanding, the principle of the back-end paging acceleration method of the present invention is described below in conjunction with the paging acceleration process in an embodiment.
Specifically, the back-end paging acceleration method includes:
and S1, setting a buffer range.
And reading the data volume needing to be read in the query request, and setting the data volume as a cache range. For example, the 60 th to 80 th pieces of data are requested, the next time, the 80 th to 100 th pieces of data are requested, and the 100 th pieces of data in the buffer are buffered.
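The cache-range rule above can be written as a small helper. This is a hypothetical sketch; the function name and the half-open range convention are assumptions made for illustration, not part of the patent text.

```python
def next_cache_range(start, end):
    """Given the requested half-open range [start, end), return the next
    batch of equal size to pre-cache, following the rule described above:
    the cache range is the next batch whose size equals the read amount.
    """
    size = end - start
    return (end, end + size)
```

For example, a request for pieces 60-80 yields a cache range of 80-100, so the cache ends up holding data through the 100th piece.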
S2, requesting the server to obtain the pre-cached data according to the query request and the cache range, and caching the pre-cached data locally.
One case is that the locally cached data does not satisfy the current query request. For example, when the user opens the query page and loads the first page of data (say, 20 entries per page), the system first judges whether this is the first query. If it is, the module cache is cleared and a request for the 20 entries of the first page is sent to the server; after the server returns the data, the local cache is updated and the first page is loaded. A request for the next 100 entries is then automatically sent to the server, and the returned data is placed into the cache.
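The first-query flow can be sketched as follows. All names here (`PagingCache`, `server_fetch`, the 20/100 defaults) are illustrative assumptions modeled on the example above, not an implementation taken from the patent.

```python
class PagingCache:
    """Hypothetical sketch of the client-side cache module's first query."""

    def __init__(self, server_fetch, page_size=20, prefetch=100):
        self.server_fetch = server_fetch  # callable(start, count) -> list
        self.page_size = page_size
        self.prefetch = prefetch
        self.cache = []                   # locally cached records, in order

    def first_query(self):
        self.cache.clear()                           # clear the module cache
        page = self.server_fetch(0, self.page_size)  # load the first page
        self.cache.extend(page)                      # update the local cache
        # automatically pre-fetch a larger block into the local cache
        self.cache.extend(self.server_fetch(self.page_size, self.prefetch))
        return page
```

After the first query, the cache already holds the first page plus the pre-fetched block, so the next several page turns can be served locally.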
In another case, when the user clicks to turn a page (that is, a query request is generated), the system judges whether the cache is about to run short of the data needed by the next request (for example, when the user requests the 60th to 80th pieces of data, the next request covers the 80th to 100th pieces, so the cache should hold data up to the 100th piece). The amount of data the cache lacks for the next query request is then calculated, a second data acquisition request is sent to the server, the server returns the corresponding data, and the returned data is placed into the cache so that the cache holds enough data. When the user clicks to turn the page again, the data is obtained from the module's cache.
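The replenishment step above, fetching only the deficit rather than a full block, can be sketched like this. The function name and signature are assumptions for illustration.

```python
def replenish_cache(cache, req_end, batch_size, server_fetch):
    """If the cache will run short for the batch after the current request
    (which ends at index req_end), fetch only the missing amount from the
    server and append it, so the cache always covers the next batch.
    """
    needed = req_end + batch_size        # cache should reach the next batch's end
    if len(cache) < needed:
        missing = needed - len(cache)    # deficit for the next query request
        cache.extend(server_fetch(len(cache), missing))
    return cache
```

For instance, if the cache holds 90 records and the user just requested pieces 60-80 in batches of 20, only 10 more records are fetched to bring the cache up to 100.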
S3, loading the data requested to be queried from the locally cached data.
The data requested to be queried is retrieved from the locally cached data, and the retrieved data is loaded.
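Step S3 reduces to a local slice of the cache, with a fallback signal for a miss. This is a sketch under the same illustrative assumptions as above; returning `None` on a miss is a design choice of the example, not specified by the patent.

```python
def load_from_cache(cache, start, end):
    """Serve the queried slice [start, end) from the local cache; return
    None on a miss so the caller can fall back to the server request of
    step S2 instead of showing partial data.
    """
    if 0 <= start and end <= len(cache):
        return cache[start:end]
    return None  # not enough cached data; a server round trip is required
```

With the pre-fetching of S2 in place, most page turns hit this local path and complete without any network traffic.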
As shown in fig. 2, the system 200 includes:
a range setting unit 210, configured to set a cache range;
a data obtaining unit 220, configured to request the server to obtain pre-cached data according to the query request and the cache range, and cache the pre-cached data locally;
a data loading unit 230, configured to load the data requested to be queried from the locally cached data.
Optionally, as an embodiment of the present invention, the range setting unit includes:
a request reading module, configured to read the data amount that needs to be read in the query request;
and a range setting module, configured to set the cache range to the next batch of data following the read data, wherein the data amount of the next batch equals the read data amount.
Optionally, as an embodiment of the present invention, the data obtaining unit includes:
a first acquisition module, configured to request the server to obtain the requested data and the data in the cache range of the requested data if the locally cached data does not satisfy the query request;
and a cache supplement module, configured to request the server to obtain the pre-cached data if the currently locally cached data falls short of the cache range of the current query request.
Fig. 3 is a schematic structural diagram of a terminal system 300 according to an embodiment of the present invention; the terminal system 300 may be used to execute the back-end paging acceleration method according to the embodiment of the present invention.
The terminal system 300 may include a processor 310, a memory 320, and a communication unit 330, which communicate via a bus. It will be understood by those skilled in the art that the structure shown in the figure does not limit the present invention: it may be a bus structure or a star structure, and may include more or fewer components than shown, combine certain components, or arrange the components differently.
The memory 320 may be used for storing instructions executed by the processor 310. The memory 320 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The executable instructions in the memory 320, when executed by the processor 310, enable the terminal system 300 to perform some or all of the steps in the method embodiments described above.
The processor 310 is the control center of the storage terminal; it connects the various parts of the entire electronic terminal using various interfaces and lines, and performs the various functions of the electronic terminal and/or processes data by running or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory. The processor may be composed of an integrated circuit (IC), for example a single packaged IC, or several connected packaged ICs with the same or different functions. For example, the processor 310 may include only a central processing unit (CPU). In the embodiment of the present invention, the CPU may have a single operation core or include multiple operation cores.
The communication unit 330 is configured to establish a communication channel so that the storage terminal can communicate with other terminals, receiving user data sent by other terminals or sending user data to them.
The present invention further provides a computer storage medium, which may store a program that, when executed, may perform some or all of the steps in the embodiments of the present invention.
Therefore, by adding a module combining client-side cache paging on the basis of server-side paging, when the user requests data for the first time, besides loading the data the user requires, an additional portion of data is loaded to the client and placed into the client-side cache; subsequent requests then obtain the data from the client-side cache, which reduces the amount of data requested from the server and improves resource utilization and client-side responsiveness.
Based on this understanding, the technical solutions in the embodiments of the present invention, or the portions thereof that contribute to the prior art, can be embodied in the form of a software product stored in a storage medium such as a USB disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, or any other medium capable of storing program code, the product including instructions for causing a computer terminal (which may be a personal computer, a server, a network terminal, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention.
The same and similar parts in the various embodiments in this specification may be referred to each other. Especially, for the terminal embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the description in the method embodiment.
For example, the above-described system embodiments are merely illustrative. The division of the units is merely a logical functional division, and other divisions are possible in actual practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. Furthermore, the shown or discussed mutual couplings, direct couplings or communication connections may be through certain interfaces, and the indirect couplings or communication connections between systems or units may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
Although the present invention has been described in detail with reference to the drawings in connection with the preferred embodiments, the present invention is not limited thereto. Those skilled in the art can make various equivalent modifications or substitutions to the embodiments of the present invention without departing from its spirit and scope, and such modifications or substitutions fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A back-end paging acceleration method, comprising:
setting a cache range;
requesting pre-cache data from a server according to the query request and the cache range, and caching the pre-cache data locally;
and loading the data requested to be queried from the locally cached data.
2. The back-end paging acceleration method according to claim 1, wherein the setting of the cache range comprises:
reading the data amount that needs to be read in the query request;
setting the cache range to the next batch of data following the read data, wherein the data amount of the next batch equals the read data amount.
3. The back-end paging acceleration method according to claim 1, wherein the requesting a server for pre-cached data according to a query request and a cache range comprises:
if the locally cached data does not satisfy the query request, requesting the server to obtain the requested data together with the data in the cache range of the requested data;
and if the currently locally cached data falls short of the cache range of the current query request, requesting the server to obtain the pre-cached data.
4. A back-end paging acceleration system, comprising:
a range setting unit, configured to set a cache range;
a data acquisition unit, configured to request the server to obtain pre-cached data according to the query request and the cache range, and cache the pre-cached data locally;
and a data loading unit, configured to load the data requested to be queried from the locally cached data.
5. The back-end paging acceleration system according to claim 4, wherein the range setting unit comprises:
a request reading module, configured to read the data amount that needs to be read in the query request;
and a range setting module, configured to set the cache range to the next batch of data following the read data, wherein the data amount of the next batch equals the read data amount.
6. The back-end paging acceleration system according to claim 4, wherein the data obtaining unit comprises:
a first acquisition module, configured to request the server to obtain the requested data and the data in the cache range of the requested data if the locally cached data does not satisfy the query request;
and a cache supplement module, configured to request the server to obtain the pre-cached data if the currently locally cached data falls short of the cache range of the current query request.
7. A terminal, comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any of claims 1-3.
8. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, carries out the method according to any one of claims 1-3.
CN201910859559.2A 2019-09-11 2019-09-11 back-end paging acceleration method, system, terminal and storage medium Withdrawn CN110737857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910859559.2A CN110737857A (en) 2019-09-11 2019-09-11 back-end paging acceleration method, system, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910859559.2A CN110737857A (en) 2019-09-11 2019-09-11 back-end paging acceleration method, system, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN110737857A true CN110737857A (en) 2020-01-31

Family

ID=69267614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910859559.2A Withdrawn CN110737857A (en) 2019-09-11 2019-09-11 back-end paging acceleration method, system, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110737857A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367952A (en) * 2020-03-02 2020-07-03 中国邮政储蓄银行股份有限公司 Paging query method and system for cache data and computer readable storage medium
CN111367952B (en) * 2020-03-02 2023-08-25 中国邮政储蓄银行股份有限公司 Paging query method, system and computer readable storage medium for cache data
CN112311843A (en) * 2020-03-18 2021-02-02 北京沃东天骏信息技术有限公司 Data loading method and device
CN111488370A (en) * 2020-04-02 2020-08-04 杭州迪普科技股份有限公司 List paging quick response system and method
CN111488370B (en) * 2020-04-02 2023-09-12 杭州迪普科技股份有限公司 List paging quick response system and method
CN111858581A (en) * 2020-06-08 2020-10-30 远光软件股份有限公司 Page query method and device, storage medium and electronic equipment
CN112069207A (en) * 2020-08-27 2020-12-11 重庆攸亮科技股份有限公司 Multi-table combined query efficiency improving method
CN112069207B (en) * 2020-08-27 2023-10-03 重庆攸亮科技股份有限公司 Multi-table joint query efficiency improving method
CN112272204A (en) * 2020-09-18 2021-01-26 苏州浪潮智能科技有限公司 Method, system, terminal and storage medium for automatically logging out web page overtime
CN112272204B (en) * 2020-09-18 2022-06-21 苏州浪潮智能科技有限公司 Method, system, terminal and storage medium for automatically logging out web page overtime
CN113986439A (en) * 2021-11-01 2022-01-28 挂号网(杭州)科技有限公司 Data display method and device

Similar Documents

Publication Publication Date Title
CN110737857A (en) back-end paging acceleration method, system, terminal and storage medium
CN109684358B (en) Data query method and device
US9552326B2 (en) Cache system and cache service providing method using network switch
CN106933871B (en) Short link processing method and device and short link server
CN110401711B (en) Data processing method, device, system and storage medium
CN103607424B (en) Server connection method and server system
CN107391664A (en) Page data processing method and system based on WEB
US20170153909A1 (en) Methods and Devices for Acquiring Data Using Virtual Machine and Host Machine
CN106202082B (en) Method and device for assembling basic data cache
CN104346345A (en) Data storage method and device
CN110489696A (en) Buffering updating method, device and electronic equipment, storage medium
CN111597213A (en) Caching method, software server and storage medium
CN111930305A (en) Data storage method and device, storage medium and electronic device
CN109361778A (en) A kind of method and terminal managing session
CN109948332A (en) A kind of physical machine login password remapping method and device
CN113438302A (en) Dynamic resource multi-level caching method, system, computer equipment and storage medium
CN109962941B (en) Communication method, device and server
CN107277088B (en) High-concurrency service request processing system and method
CN112463399A (en) Server BMC information management method, system, terminal and storage medium
CN112463748A (en) Storage system file lock identification method, system, terminal and storage medium
CN112491939A (en) Multimedia resource scheduling method and system
CN112688980B (en) Resource distribution method and device, and computer equipment
CN113094391B (en) Calculation method, device and equipment for data summarization supporting cache
CN117057799B (en) Asset data processing method, device, equipment and storage medium
CN103327048A (en) Online-application data matching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200131
