CN109240946A - The multi-level buffer method and terminal device of data - Google Patents
- Publication number
- CN109240946A (application CN201811038785.6A / CN201811038785A)
- Authority
- CN
- China
- Prior art keywords
- data
- caching
- target data
- storage medium
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0811—Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0877—Cache access modes
- G06F12/0882—Page mode
Abstract
The present invention is applicable to the technical field of data processing and provides a multi-level caching method for data and a terminal device. The method includes: obtaining target data to be cached together with its data information; calculating a caching equivalent coefficient of the target data according to the data information; determining, according to that coefficient, the storage medium in which the target data is to be stored; and then, according to the data volume of the target data and the capacities of the different caching blocks in that storage medium, storing the target data into the corresponding caching block. In this way, target data with different data information is guaranteed to be stored in a caching block of a suitable storage medium, so that, on the premise of efficiently using the caching blocks of the storage medium, the efficiency of data storage and release and the response speed of cached data are improved.
Description
Technical field
The invention belongs to the technical field of data processing, and in particular relates to a multi-level caching method for data and a terminal device.
Background technique
As an application program or web page is used, it generates more and more application data, so that calls to this data become highly concurrent and the response speed of the back-end interface suffers. In the prior art, cache space is added to the back-end program or to the storage space, and the application data generated by the program at runtime is stored into the cache space to relieve the pressure of data storage and interface calls.
However, under highly concurrent interface access, the response speed of cached data still drops, and the background server still faces very heavy data-call pressure.
Summary of the invention
In view of this, embodiments of the present invention provide a multi-level caching method for data and a terminal device, to solve the prior-art problems that the response speed of cached data drops and that the background server faces very heavy data-call pressure.
A first aspect of an embodiment of the present invention provides a multi-level caching method for data, including:
obtaining target data to be cached and its data information, the data information including the data volume of the target data;
calculating a caching equivalent coefficient of the target data according to the data information, and determining, according to the caching equivalent coefficient, the storage medium for storing the target data, caching blocks of different capacities being preset in the storage medium; and
storing the target data into the corresponding caching block in the storage medium according to the capacities of the caching blocks in the storage medium and the data volume of the target data.
A second aspect of an embodiment of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the following steps:
obtaining target data to be cached and its data information, the data information including the data volume of the target data;
calculating a caching equivalent coefficient of the target data according to the data information, and determining, according to the caching equivalent coefficient, the storage medium for storing the target data, caching blocks of different capacities being preset in the storage medium; and
storing the target data into the corresponding caching block in the storage medium according to the capacities of the caching blocks in the storage medium and the data volume of the target data.
A third aspect of an embodiment of the present invention provides a computer-readable storage medium storing a computer program, the computer program including program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect above.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:
An embodiment of the present invention obtains target data to be cached and its data information, calculates the caching equivalent coefficient of the target data according to the data information, determines the storage medium for storing the target data according to that coefficient, and then stores the target data into the corresponding caching block in the storage medium according to the data volume of the target data and the capacities of the different caching blocks in the storage medium. This guarantees that target data with different data information can be stored in a caching block of a suitable storage medium, so that, on the premise of efficiently using the caching blocks of the storage medium, the efficiency of data storage and release and the response speed of cached data are improved.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or for the prior-art description are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the multi-level caching method for data provided by Embodiment 1 of the present invention;
Fig. 2 is a detailed implementation flowchart of step S102 of the multi-level caching method for data provided by Embodiment 2 of the present invention;
Fig. 3 is a flowchart of the multi-level caching method for data provided by Embodiment 3 of the present invention;
Fig. 4 is a schematic diagram of the terminal device provided by Embodiment 4 of the present invention;
Fig. 5 is a schematic diagram of the terminal device provided by Embodiment 5 of the present invention.
Specific embodiment
In the following description, for purposes of explanation rather than limitation, specific details such as particular system structures and techniques are set forth in order to provide a thorough understanding of the embodiments of the present invention. However, it will be clear to those skilled in the art that the present invention may also be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the invention.
To illustrate the technical solutions of the present invention, specific embodiments are described below.
Referring to Fig. 1, Fig. 1 is a flowchart of the multi-level caching method for data provided by Embodiment 1 of the present invention. The executing subject of the method in this embodiment includes, but is not limited to, devices such as computers and servers that have a data caching function. The method shown in Fig. 1 may include the following steps.
In S101, target data to be cached and its data information are obtained; the data information includes the data volume of the target data.
During the use of many applications and web pages, a large amount of application data arises. As the content of applications and web pages grows more complex, and the number of users and the volume of accesses increase, applications and web pages must support more concurrency, so application servers and database servers perform more and more computation. But application server resources are limited: the number of requests a database can accept per second is limited, and file reads and writes are limited as well. We therefore need to deliver as much throughput as possible with limited resources. An effective way to do this is to introduce caching: at each stage, a request can fetch the target data directly from the cache and return it, which reduces computation, effectively improves response speed, and lets limited resources serve more users.
A data cache is usually high-speed memory inside the hard drive which, like a buffer in a computer, temporarily holds some data for reading and re-reading. A hard disk with a large data cache has a great advantage when accessing fragmented files. A user's request travels from the application interface over the network to the application service and on to storage, and then the content returns to the interface for display. By putting the application data from the first request into memory, the page can first display the data saved last time while a fresh request is in flight; when that request completes, the display is refreshed with the new data, which is cached again. This greatly reduces the volume of data requests.
The target data to be cached is obtained; the target data may include application data generated while an application or web page is running. The data information of the target data may include its data type, data identifier, data volume, and similar information. By obtaining the target data to be cached and its data information, a suitable cache space for storing the target data can be determined from the data information.
In S102, the caching equivalent coefficient of the target data is calculated according to the data information, and the storage medium for storing the target data is determined according to the caching equivalent coefficient; caching blocks of different capacities are preset in the storage medium.
Different storage media store different types of data in different ways. For example, a computer's cache usually uses random access memory (Random Access Memory, RAM), so after use the files must still be sent to a memory such as the hard disk for permanent storage. The largest cache in a computer is the memory module; the fastest are the L1 and L2 caches embedded in the central processing unit (Central Processing Unit, CPU); the video memory of a graphics card is the cache serving its processing chip; and hard disks also carry caches of 16 MB or 32 MB. The working principle is as follows: when the CPU wants to read data, it first looks in the CPU cache; if the data is found, it is read and handed to the CPU immediately. If it is not found, the data is read from the comparatively slower memory and handed to the CPU, and at the same time the data block containing this data is loaded into the cache, so that later reads of the whole block can all be served from the cache without accessing memory again. This reading mechanism gives the CPU a very high cache hit rate: about 90% of the data the CPU wants to read next is in the CPU cache, and only about 10% needs to be read from memory. This saves a great deal of the time the CPU would spend reading memory directly, and means the CPU essentially does not wait when reading data.
There is also a caching mechanism called page caching: a dynamic page is rendered into a static page kept on the server side, and when a user requests the same page again, the static page is delivered directly to the client, with no further program execution or database access, which greatly reduces the load on the server. On each page access, the system checks whether the corresponding cached page exists; if it does not, the database is queried, the page is rendered, and a cached page file is generated, so that the file can serve the next access.
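The page-cache flow just described (check for a cached page file, otherwise render and generate one) can be sketched as follows. The directory name and the rendering stand-in are illustrative assumptions, not part of the invention; a real system would query the database inside `render_page`.

```python
import os

CACHE_DIR = "page_cache"  # hypothetical directory for generated static pages


def render_page(page_id):
    # Stand-in for "connect the database and render the page".
    return f"<html><body>content for {page_id}</body></html>"


def get_page(page_id):
    """Serve the cached static page if it exists; otherwise render and cache it."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, f"{page_id}.html")
    if os.path.exists(path):           # cache hit: static file, no DB access
        with open(path) as f:
            return f.read()
    html = render_page(page_id)        # cache miss: render dynamically
    with open(path, "w") as f:         # persist as a static page for next time
        f.write(html)
    return html
```

The second and every later access to the same page is then served from the file on disk.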
Database caching is generally provided by the database itself, and caches can be established for tables. In a database, a user may execute the same query statement many times; to improve query efficiency, the database sets aside a special region of memory to hold the queries the user has executed recently, and this region is the cache. If a project contains tables that rarely change while the server receives a large number of identical queries against them, the data cache works very well. In general, the effect is obvious for applications on the World Wide Web (Web). Suppose, for example, the database contains a product information table and enterprise users query product information through a web page. If, by design, the default query result displays the products traded in the most recent month, then whenever a user queries product information with the default settings, the information is fetched from the cache, and queries are much faster.
The browser caching mechanism is chiefly the caching mechanism defined by the Hypertext Transfer Protocol (HyperText Transfer Protocol, HTTP), such as the Expires and Cache-Control fields, and also includes caching mechanisms defined outside HTTP, for example via the HTML meta tag. Expires is a response-header field of the web server: when answering an HTTP request, it tells the browser that, before the expiry time, the browser may take the data directly from its own cache without issuing another request. Cache-Control has the same effect as Expires: both express the validity period of the current resource and control whether the browser takes the data directly from its cache or sends another request to the server to fetch the data again.
A browser caches the resources it has visited, such as web pages and images. If a resource is within its validity period, the next identical request is served straight from the cache. After expiry, the browser takes the resource's last modification time and lets the server decide whether the copy is stale; if it is, new data is returned to the browser and cached again. The browser's cache lives on the user's hard disk; each time it is used, the browser first loads the cached data from the hard disk into memory and then reads it into the browser. The rules of browser-side caching are mainly defined in the HTTP header and the HTML meta tag. Along the two dimensions of freshness and validation, they govern whether the browser may use the cached data directly or must still fetch an updated version from the origin server. Cached data must satisfy the following conditions for the browser, under the validity-period model, to consider it fresh enough: it carries complete expiry-control header information, such as the HTTP protocol headers, and is still within its validity period; or the browser already used this cached data in the previous session, that is, its freshness was checked the last time the user accessed the data. If either condition holds, the browser can take the cached data directly from the cache and render it.
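The freshness dimension described above can be sketched roughly as follows. The pre-parsed header representation is a simplifying assumption: real browsers parse HTTP-date strings and the full set of Cache-Control directives, and Cache-Control's max-age takes precedence over Expires when both are present.

```python
import time


def is_fresh(entry, now=None):
    """Decide whether a cached response is still fresh.

    `entry` holds `stored_at` (epoch seconds when the response was cached)
    plus already-parsed freshness headers: `max_age` (seconds, from
    Cache-Control) and/or `expires` (epoch seconds, from Expires).
    """
    now = time.time() if now is None else now
    if "max_age" in entry:                        # Cache-Control wins
        return now - entry["stored_at"] < entry["max_age"]
    if "expires" in entry:                        # fall back to Expires
        return now < entry["expires"]
    return False                                  # no freshness info: revalidate
```

A stale entry is not necessarily discarded; it moves on to the validation step (entity tag / last modification time) described next.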
Further, under the validation mechanism of the browser-cache model, changed data can be handled by means of a check value: when the server returns data, it sometimes carries the resource's entity tag in the header information, which serves as the validation token for the browser's next request. If the validation token does not match, the data has been modified or has expired, and the browser must fetch the data content again.
The caching equivalent coefficient of the target data is calculated from the data information, and the storage medium of the target data is determined from this coefficient; that is, it is determined which storage medium the data is stored in.
Optionally, the caching equivalent coefficient of the target data may be determined from the data volume and the importance of the target data, and the storage medium for storing the target data determined from the coefficient.
There are many kinds of storage media, each with different characteristics and each suited to data with different attributes. For example, a client-side cache supports dynamic configuration and can be controlled uniformly through a back-office interface; it is suitable for data that is updated infrequently but accessed very heavily.
In S103, the target data is stored into the corresponding caching block in the storage medium according to the capacities of the caching blocks in the storage medium and the data volume of the target data.
Once the storage medium has been determined, storing the target data into it directly would cause heavy fragmentation and could delay later calls to the target data. The cache space of the storage medium holding these data therefore needs unified management, with the target data stored into a suitable caching block.
In this embodiment, the storage medium is divided according to capacity: by setting up blocks of different capacities within the storage medium, the medium is managed uniformly so that data of suitable sizes can be stored.
Further, step S103 may specifically include:
setting the smallest caching-block capacity in the storage medium to Cap_blo(min), and setting the capacity of the i-th caching block, growing by the scaling factor A, to Cap_blo(i) = A^(i-1) · Cap_blo(min);
if the data volume Cap_blo(pac) of the target data satisfies A^(i-2) · Cap_blo(min) ≤ Cap_blo(pac) ≤ A^(i-1) · Cap_blo(min), storing the target data into the i-th caching block of the storage medium.
Specifically, the smallest caching-block capacity in the storage medium is set to Cap_blo(min), and the capacity of the i-th caching block is set, growing by the factor A, to Cap_blo(i) = A^(i-1) · Cap_blo(min). This is in essence a geometric sequence with first term Cap_blo(min) and common ratio A. The total cache capacity of the storage medium is determined from the individual block capacities as the sum of the geometric sequence over its n blocks:
Cap_total = Cap_blo(min) · (A^n − 1) / (A − 1).
Further, the block capacities of the storage medium may also be set in a multiplicative relationship. For example, the smallest caching-block capacity in the medium is set to Cap_blo(min), and the capacity of the i-th caching block is set, growing by multiples, to Cap_blo(i) = i · Cap_blo(min); this is in essence a division of the medium's cache capacity following an arithmetic progression with first term Cap_blo(min) and common difference Cap_blo(min). By dividing the storage medium into caching blocks of increasing sizes and storing data of different sizes in the corresponding blocks, memory fragmentation within the medium can be reduced, the cache space managed effectively, and the cache space used to the fullest.
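The geometric block-sizing scheme above can be sketched as follows; the arithmetic variant is obtained by replacing the capacity formula with i · Cap_blo(min). The block count n is a free parameter not fixed by the description.

```python
def block_capacity(i, cap_min, a):
    """Capacity of the i-th caching block: Cap_blo(i) = A**(i-1) * Cap_blo(min)."""
    return a ** (i - 1) * cap_min


def total_capacity(n_blocks, cap_min, a):
    """Total cache capacity: the geometric sum Cap_blo(min) * (A**n - 1) / (A - 1)."""
    return cap_min * (a ** n_blocks - 1) / (a - 1)


def pick_block(data_volume, cap_min, a, n_blocks):
    """Index of the smallest block that holds the data, mirroring the condition
    A**(i-2) * Cap_blo(min) <= Cap_blo(pac) <= A**(i-1) * Cap_blo(min)."""
    for i in range(1, n_blocks + 1):
        if data_volume <= block_capacity(i, cap_min, a):
            return i
    return None  # larger than the biggest block: cannot be placed
```

With Cap_blo(min) = 4 and A = 2, the four blocks have capacities 4, 8, 16, 32, a datum of volume 10 lands in block 3, and the total capacity is 60.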
In the above scheme, target data to be cached and its data information are obtained, the caching equivalent coefficient of the target data is calculated from the data information, the storage medium for storing the target data is determined from that coefficient, and then the target data is stored into the corresponding caching block in the storage medium according to the data volume of the target data and the capacities of the different blocks in the medium. This guarantees that target data with different data information can be stored in a caching block of a suitable storage medium and, on the premise of efficiently using the caching blocks of the storage medium, improves the efficiency of data storage and release and the response speed of cached data.
Referring to Fig. 2, Fig. 2 is a detailed implementation flowchart of step S102 of the multi-level caching method for data, for the case where the data information further includes the creation time of the target data, the newest access time at which it was last accessed, and the data update time at which it was last updated. The method shown in Fig. 2 may include the following steps.
In S201, the caching equivalent coefficient of the target data is calculated from the current time and each item of the target data's data information.
If a set of target data is to be stored in a storage medium, the data information of this set of data must first be determined. In this embodiment, the data information may include the creation time, newest access time, data update time, current time and data volume of the target data.
The creation time of the target data indicates when the newest target data was generated; because many types of new data may be generated while an application or web page runs, and new data overwrites the original old data, the generation time of the target data needs to be determined. The newest access time indicates when the target data was last accessed or called, and measures the data's freshness and frequency of use. The data update time reflects that, while an application or web page runs, data is inevitably modified or updated; it indicates the update time of data of the same data type or the same data identifier. The current time indicates the moment at which, after the target data is obtained, its caching equivalent coefficient is calculated. The data volume indicates the size of the target data to be cached, and reflects how much space of the storage medium the target data will occupy.
Further, in this embodiment, the data identifier of the target data may be determined first, and the cached data corresponding to that identifier located from it. The target data is then compared with the cached data: if nothing has changed, the target data is not placed in the cache and the original cached data continues to be used; if something has changed, the values or extent of the change in the target data are determined, and the changed portion of the data is stored into the storage medium according to the change.
Once the creation time, newest access time, data update time, current time and data volume of the target data have been determined, the caching equivalent coefficient of the target data is calculated by a formula in which:
Qua denotes the data volume of the target data; Time_cre denotes the creation time; Time_cur denotes the current time; Time_las denotes the newest access time; and Time_mod denotes the data update time.
Specifically, if the target data has a larger data volume, an older creation time and a more recent newest access time, it has a higher caching equivalent coefficient and can be stored in a higher-ranked storage medium; conversely, if the target data has a smaller data volume, a more recent creation time and a less recent newest access time, it has a lower caching equivalent coefficient and can be stored in a lower-ranked storage medium.
In S202, target data whose caching equivalent coefficient is greater than a preset first cache threshold is stored into the client cache; target data whose coefficient is less than the first cache threshold but greater than a preset second cache threshold is stored into the virtual machine cache; and target data whose coefficient is less than the second cache threshold is stored into the redis cache.
Once the caching equivalent coefficient of the target data has been determined, two grade thresholds G1 and G2 are preset, with G1 > G2, and the obtained value G is compared with the two thresholds; according to the characteristics of each storage medium, target data with different caching equivalent coefficients is stored into the corresponding storage medium. The client cache supports dynamic configuration, can be controlled uniformly through a back-office interface, and suits interfaces whose data is updated infrequently but accessed very heavily; since redis's support for caching large binary objects is poor and consumes considerable data input/output resources, the virtual machine cache makes up for this shortcoming of the redis cache.
Specifically: if G1 < G, the data is cached in the client cache; if G = G1, in the client cache and/or the virtual machine cache; if G2 < G < G1, in the virtual machine cache; if G = G2, in the virtual machine cache and/or the redis cache; and if G < G2, in the redis cache.
Further, after each grading, the values of G1 and G2 are updated according to the remaining cache space of each storage medium, keeping the system in dynamic balance. When cache space is low, G1 and G2 are increased by rule, so that fewer files meet the caching requirement and cache space is released as quickly as possible; when cache space is ample, G1 and G2 are reduced by rule, so that the data being read can enter the cache.
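The comparison rules above can be sketched as the following tier selector; the tier names are shorthand for the client cache, virtual machine cache and redis cache, and at the boundary values the data may go to either adjacent tier.

```python
def pick_storage_medium(g, g1, g2):
    """Map a caching equivalent coefficient G onto cache tiers, with G1 > G2.
    Boundary values return both admissible tiers ("and/or" in the text)."""
    assert g1 > g2
    if g > g1:
        return ["client"]
    if g == g1:
        return ["client", "vm"]
    if g > g2:
        return ["vm"]
    if g == g2:
        return ["vm", "redis"]
    return ["redis"]
```

Raising G1 and G2 when space is low therefore pushes borderline data down-tier (or out), while lowering them when space is ample admits more data, as described above.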
Further, since the database has more storage resources, these target data can simultaneously be stored into the database. Provided by the database itself, a cache can be established for a data table: the database's data is temporarily held in one place, and an identical later request returns this data directly without querying the various tables again, which cuts database-lookup time and improves efficiency. Not every historical record is cached; there must be a policy. For example, only the last two months of data are cached, and a record from before two months ago that is never requested again after its last request is erased when the cache is recycled, while records requested repeatedly in the recent period are kept. Records that are too old or used too rarely are evicted first; otherwise, caching too much loses the essence and meaning of caching.
In the database, data is stored on disk. Although the database layer does its own caching, this database-level caching is generally aimed at query content and generally only takes effect when the data in a table has not changed; sometimes it cannot reduce the enormous pressure that the business system's inserts, deletes, queries and updates place on the database. In that case, the usual practice is to add a cache server between the database and the service server, for example the familiar redis. The first time a client requests data, it is fetched from the database and put into redis; on the premise that the data has not expired or changed, subsequent requests take the data directly from redis, which greatly relieves the pressure on the database.
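The redis usage described here is the familiar cache-aside flow. A minimal sketch follows, with in-process dicts standing in for both redis and the database so the example stays self-contained; these stand-ins are assumptions, not part of the invention.

```python
class CacheAside:
    """Cache-aside: check the cache first, fall back to the database on a miss,
    then populate the cache so later identical requests skip the database."""

    def __init__(self, db):
        self.db = db          # dict standing in for the database
        self.cache = {}       # dict standing in for redis
        self.db_reads = 0     # counts how often the database is actually hit

    def get(self, key):
        if key in self.cache:             # cache hit: no database access
            return self.cache[key]
        self.db_reads += 1                # cache miss: read the database
        value = self.db[key]
        self.cache[key] = value           # populate the cache for next time
        return value

    def invalidate(self, key):
        self.cache.pop(key, None)         # evict when the data changes or expires
```

Repeated reads of the same key then cost a single database access until the entry is invalidated.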
In the above scheme, the caching equivalent coefficient of the target data is calculated from the creation time, latest access time, data update time, current time, and data volume of the target data, and the coefficient is compared with preset cache thresholds to determine the storage medium in which the target data can be stored. Target data with different caching equivalent coefficients are stored in different storage media, so that different types of target data are called or processed through the corresponding storage medium, improving the processing and response speed of the data.
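The medium selection described above can be sketched as a simple threshold comparison. The concrete threshold values G1 and G2 here are illustrative, since the disclosure does not fix them:

```python
def choose_medium(coefficient, g1=0.8, g2=0.3):
    """Map a caching equivalent coefficient to a storage medium using the
    two preset cache thresholds (G1 > G2; values are illustrative)."""
    if coefficient > g1:
        return "client_cache"    # hottest data: closest to the front end
    if coefficient > g2:
        return "vm_cache"        # middle tier: virtual machine cache
    return "redis_cache"         # coolest cached data: redis tier
```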
Referring to Fig. 3, Fig. 3 is a flowchart of the multi-level data caching method provided by Embodiment 1 of the present invention. As shown in Fig. 3, the multi-level data caching method may comprise the following steps:
In S301, a call request for target data is received; the call request includes the data information of the target data.
When a user initiates a data request through a browser, the browser can obtain the data through the following stages. Local cache stage: the browser first searches for the data locally; if the data are found and have not expired, this copy is used and no HTTP request is sent to the server. Negotiated cache stage: if corresponding data are found in the local cache but it is unknown whether they have expired, an HTTP request is sent to the server; the server judges the request, and if the requested data on the server have not been modified or expired, it returns a 304 status code to the browser (which can be understood as a secret signal from the server) telling the browser to use the locally found data. Cache miss stage: when the server finds that the requested resource has been modified, or this is a new request, the server returns the data along with a 200 status code. This process presupposes that the data are found; if the data do not exist on the server, 404 is returned.
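The three stages above can be illustrated from the server's side. This sketch models the resource as a dict with a modification timestamp rather than using a real HTTP stack, so the names and shapes are assumptions:

```python
def handle_request(resource, if_modified_since=None):
    """Server side of browser caching: 404 if the resource is missing,
    304 if it is unmodified since the client's cached copy (negotiated
    cache stage), else 200 with the body (cache miss stage)."""
    if resource is None:
        return 404, None
    if if_modified_since is not None and resource["modified"] <= if_modified_since:
        return 304, None          # browser may reuse its locally cached copy
    return 200, resource["body"]  # resource is new or was modified
```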
In the present embodiment, the received call request for target data includes the data information of the target data, where the data information may include the creation time, latest access time, data update time, current time, and data volume of the target data. These items are used to determine which storage medium and which cache block the required target data are stored in.
In S302, the storage medium storing the target data and the cache block therein are determined according to the data information.
In practical applications, after the target data have been stored in a storage medium, the required target data can be called from the storage medium when the front end needs them. Normally, the target data can be queried through Structured Query Language (SQL) statements, but only two completely identical SQL statements can find the corresponding target data in the cache of the storage medium: if the conditions or the fields used differ, the database system will not use the cache for query optimization. Moreover, SQL statement parsing is in fact case-sensitive, which means that the same query statement with different keyword capitalization is also regarded as a different SQL statement. In view of this, in the present embodiment, the storage location of the target data is determined automatically, and the target data are then looked up and called from that location.
After the call request for the target data is obtained, the data information is determined from the call request, and the caching equivalent coefficient of the required target data is calculated from this data information. To guarantee that the address used for querying matches the address used for storage, the specific method for calculating the caching equivalent coefficient is the same as the implementation of step S201 in Embodiment 2 and is not repeated here. After the caching equivalent coefficient of the target data is calculated, the storage medium of the target data is determined by the preset grade thresholds; the specific determination method is the same as step S202 and is not repeated here. After the storage medium is determined, the cache block of the target data in the storage medium is determined according to the data volume of the target data and the block capacities of the cache blocks in the storage medium; the specific determination method is the same as the implementation of step S103 and is not repeated here.
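Assuming the geometric block-capacity rule Cap_blo(i) = A^(i-1)·Cap_blo(min) defined later in this disclosure, the whole of step S302 can be sketched end to end; the threshold and capacity values are illustrative:

```python
import math

def locate(coefficient, data_volume, cap_min=64, ratio=2, g1=0.8, g2=0.3):
    """S302 sketch: pick the medium from the caching equivalent
    coefficient, then the cache block index i such that
    ratio**(i-2)*cap_min <= data_volume <= ratio**(i-1)*cap_min."""
    if coefficient > g1:
        medium = "client_cache"
    elif coefficient > g2:
        medium = "vm_cache"
    else:
        medium = "redis_cache"
    # smallest block whose capacity ratio**(i-1)*cap_min covers the data
    i = max(1, math.ceil(math.log(data_volume / cap_min, ratio)) + 1)
    return medium, i
```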
In S303, the target data are called from the cache block of the storage medium.
In practical applications, after the target data have been stored in a storage medium, the required target data can be called from the storage medium when the front end needs them. Normally, the target data can be queried through SQL statements, but only two completely identical SQL statements can find the corresponding target data in the cache of the storage medium: if the conditions or the fields used differ, the database system will not use the cache for query optimization. In addition, MySQL differs from some other databases in that its SQL statement parsing is in fact case-sensitive; the same query statement with different keyword capitalization is likewise regarded as a different SQL statement. In view of this, in the present embodiment, the storage location of the target data is determined automatically, and the target data are then looked up and called from that location.
In the present embodiment, after the storage medium storing the target data and the cache block therein have been determined, the corresponding target data are called directly from the address of that cache block in the storage medium.
Further, since the database has more storage resources, these target data can also be stored in the database at the same time. When data are called, however, the corresponding storage medium and cache block are determined first and the data are searched for in that cache block; only when all the above caches miss or are disabled does the interface access the database. This greatly relieves the pressure on the database while improving the response speed of the interface.
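The calling flow just described (probe the determined cache levels first, and fall back to the database only when every cache misses or is disabled) can be sketched as follows, with plain dicts standing in for the caches and the database:

```python
def fetch(key, levels, db):
    """S303 sketch: `levels` is an ordered list of (enabled, cache_dict)
    pairs for client / virtual machine / redis caches; `db` is a dict
    standing in for the database. Disabled or missing levels are skipped."""
    for enabled, cache in levels:
        if enabled and key in cache:
            return cache[key], "cache"      # served without touching the database
    return db.get(key), "database"          # every cache missed or was disabled
```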
In the above scheme, after the call request for the target data is received, the storage medium storing the target data and the cache block therein are determined according to the data information of the target data in the call request, and finally the target data are called directly from the cache block of the storage medium. This avoids the uncertainty of looking up target data through query statements, improves the efficiency of calling or releasing the target data, and also improves the calling efficiency of the cache.
Referring to Fig. 4, Fig. 4 is a schematic diagram of a terminal device provided by an embodiment of the present invention. The terminal device 400 may be a device such as a computer or a server and has the multi-level data caching function. The units included in the device 400 of this embodiment are used to execute the steps in the embodiment corresponding to Fig. 1; refer to the related descriptions of Fig. 1 and its corresponding embodiment, which are not repeated here. The device 400 of this embodiment includes a data acquisition unit 401, a medium determining unit 402, and a data storage unit 403.
The data acquisition unit 401 is configured to obtain the target data to be cached and its data information; the data information includes the data volume of the target data.
The medium determining unit 402 is configured to calculate the caching equivalent coefficient of the target data according to the data information, and to determine the storage medium storing the target data according to the caching equivalent coefficient; cache blocks of different storage capacities are preset in the storage medium.
The data storage unit 403 is configured to store the target data in the corresponding cache block in the storage medium according to the storage capacities of the cache blocks in the storage medium and the data volume of the target data.
The storage medium includes a client cache, a virtual machine cache, and a redis cache.
Further, the medium determining unit 402 may include:
a coefficient calculation unit, configured to calculate the caching equivalent coefficient of the target data according to the current time and the items of data information of the target data;
an index comparison unit, configured to store target data whose caching equivalent coefficient is greater than a preset first cache threshold into the client cache; to store target data whose caching equivalent coefficient is less than the first cache threshold and greater than a preset second cache threshold into the virtual machine cache; and to store target data whose caching equivalent coefficient is less than the second cache threshold into the redis cache.
Further, the coefficient calculation unit may specifically be configured to calculate the caching equivalent coefficient of the target data by the following formula:
where Qua denotes the data volume of the target data; Time_cre denotes the creation time; Time_cur denotes the current time; Time_las denotes the latest access time; and Time_mod denotes the data update time.
Further, the data storage unit 403 may include:
a capacity setting unit, configured to set the smallest cache block capacity in the storage medium as Cap_blo(min), the block capacity of the i-th cache block being Cap_blo(i) = A^(i-1)·Cap_blo(min);
a block storage unit, configured to store the target data into the i-th cache block in the storage medium if the data volume Cap_blo(pac) of the target data satisfies A^(i-2)·Cap_blo(min) ≤ Cap_blo(pac) ≤ A^(i-1)·Cap_blo(min); where A denotes the ratio between the capacities of cache blocks of adjacent sizes.
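The capacity rule of the capacity setting unit and the admission condition of the block storage unit can be checked with a small sketch; the concrete values chosen for Cap_blo(min) and A are illustrative:

```python
def block_capacities(cap_min, ratio, n):
    """Capacities of the first n cache blocks, per
    Cap_blo(i) = A**(i-1) * Cap_blo(min)."""
    return [ratio ** (i - 1) * cap_min for i in range(1, n + 1)]

def block_index(pac, cap_min, ratio):
    """Smallest block index i whose capacity A**(i-1) * Cap_blo(min)
    covers the data volume pac, matching the stated admission condition
    A**(i-2)*Cap_blo(min) <= pac <= A**(i-1)*Cap_blo(min)."""
    i, cap = 1, cap_min
    while pac > cap:
        i, cap = i + 1, cap * ratio
    return i
```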
Further, the terminal device may also include:
a data request unit, configured to receive a call request for target data, the call request including the data information of the target data;
a storage determining unit, configured to determine, according to the data information, the storage medium storing the target data and the cache block therein;
a data call unit, configured to call the target data from the cache block of the storage medium.
In the above scheme, the target data to be cached and its data information are obtained, the caching equivalent coefficient of the target data is calculated from the data information, and the storage medium capable of storing the target data is determined according to this coefficient; then, according to the data volume of the target data and the storage capacities of the different cache blocks in the storage medium, the target data are stored into the corresponding cache block in the storage medium. In this way, target data with different data information are guaranteed to be stored into cache blocks in suitable storage media, and, on the premise of efficiently using the cache blocks of the storage media, the efficiency of data storage and release and the response speed of the data in the cache are improved.
Fig. 5 is a schematic diagram of the terminal device provided by an embodiment of the present invention. As shown in Fig. 5, the terminal device 5 of this embodiment includes a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50. When the processor 50 executes the computer program 52, the steps in the above embodiments of the multi-level data caching method are implemented, for example steps 101 to 103 shown in Fig. 1. Alternatively, when the processor 50 executes the computer program 52, the functions of the modules/units in the above device embodiments are implemented, for example the functions of units 401 to 403 shown in Fig. 4.
Exemplarily, the computer program 52 may be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 52 in the terminal device 5.
The terminal device 5 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will understand that Fig. 5 is only an example of the terminal device 5 and does not constitute a limitation on the terminal device 5; it may include more or fewer components than illustrated, combine certain components, or have different components. For example, the terminal device may also include input/output devices, a network access device, a bus, and the like.
The processor 50 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card, FC) equipped on the terminal device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used to store the computer program and other programs and data required by the terminal device. The memory 51 may also be used to temporarily store data that have been output or will be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is taken as an example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the device is divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the above embodiments, each embodiment is described with its own emphasis. For parts that are not detailed or recorded in one embodiment, reference may be made to the related descriptions of other embodiments.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program, and the computer program may be stored in a computer-readable storage medium.
The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or replace some of the technical features with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the protection scope of the present invention.
Claims (10)
1. A multi-level data caching method, characterized by comprising:
obtaining target data to be cached and its data information, the data information including the data volume of the target data;
calculating a caching equivalent coefficient of the target data according to the data information, and determining, according to the caching equivalent coefficient, a storage medium storing the target data, wherein cache blocks of different storage capacities are preset in the storage medium; and
storing the target data in a corresponding cache block in the storage medium according to the storage capacities of the cache blocks in the storage medium and the data volume of the target data.
2. The multi-level data caching method according to claim 1, characterized in that the storage medium includes a client cache, a virtual machine cache, and a redis cache; and the calculating a caching equivalent coefficient of the target data according to the data information, and determining, according to the caching equivalent coefficient, the storage medium storing the target data comprises:
calculating the caching equivalent coefficient of the target data according to the current time and the items of data information of the target data;
storing target data whose caching equivalent coefficient is greater than a preset first cache threshold into the client cache; storing target data whose caching equivalent coefficient is less than the first cache threshold and greater than a preset second cache threshold into the virtual machine cache; and storing target data whose caching equivalent coefficient is less than the second cache threshold into the redis cache.
3. The multi-level data caching method according to claim 2, characterized in that the data information further includes the creation time, latest access time, and data update time of the target data; and the calculating according to the current time and the items of data information of the target data comprises:
calculating the caching equivalent coefficient of the target data by the following formula:
where Qua denotes the data volume of the target data; Time_cre denotes the creation time; Time_cur denotes the current time; Time_las denotes the latest access time; and Time_mod denotes the data update time.
4. The multi-level data caching method according to claim 1, characterized in that the storing the target data in a corresponding cache block in the storage medium according to the storage capacities of the cache blocks in the storage medium and the data volume of the target data comprises:
setting the smallest cache block capacity in the storage medium as Cap_blo(min), the block capacity of the i-th cache block being Cap_blo(i) = A^(i-1)·Cap_blo(min); and
if the data volume Cap_blo(pac) of the target data satisfies A^(i-2)·Cap_blo(min) ≤ Cap_blo(pac) ≤ A^(i-1)·Cap_blo(min), storing the target data in the i-th cache block in the storage medium, where A denotes the ratio between the capacities of cache blocks of adjacent sizes.
5. The multi-level data caching method according to any one of claims 1-4, characterized in that the method further comprises:
receiving a call request for target data, the call request including the data information of the target data;
determining, according to the data information, the storage medium storing the target data and the cache block therein; and
calling the target data from the cache block of the storage medium.
6. A terminal device, characterized by comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein when the processor executes the computer program, the following steps are implemented:
obtaining target data to be cached and its data information, the data information including the data volume of the target data;
calculating a caching equivalent coefficient of the target data according to the data information, and determining, according to the caching equivalent coefficient, a storage medium storing the target data, wherein cache blocks of different storage capacities are preset in the storage medium; and
storing the target data in a corresponding cache block in the storage medium according to the storage capacities of the cache blocks in the storage medium and the data volume of the target data.
7. The terminal device according to claim 6, characterized in that the storage medium includes a client cache, a virtual machine cache, and a redis cache; and the calculating a caching equivalent coefficient of the target data according to the data information, and determining, according to the caching equivalent coefficient, the storage medium storing the target data comprises:
calculating the caching equivalent coefficient of the target data according to the current time and the items of data information of the target data;
storing target data whose caching equivalent coefficient is greater than a preset first cache threshold into the client cache; storing target data whose caching equivalent coefficient is less than the first cache threshold and greater than a preset second cache threshold into the virtual machine cache; and storing target data whose caching equivalent coefficient is less than the second cache threshold into the redis cache.
8. The terminal device according to claim 7, characterized in that the data information further includes the creation time, latest access time, and data update time of the target data; and the calculating according to the current time and the items of data information of the target data comprises:
calculating the caching equivalent coefficient of the target data by the following formula:
where Qua denotes the data volume of the target data; Time_cre denotes the creation time; Time_cur denotes the current time; Time_las denotes the latest access time; and Time_mod denotes the data update time.
9. The terminal device according to claim 6, characterized in that the storing the target data in a corresponding cache block in the storage medium according to the storage capacities of the different cache blocks in the storage medium and the data volume of the target data comprises:
setting the smallest cache block capacity in the storage medium as Cap_blo(min), the block capacity of the i-th cache block being Cap_blo(i) = A^(i-1)·Cap_blo(min); and
if the data volume Cap_blo(pac) of the target data satisfies A^(i-2)·Cap_blo(min) ≤ Cap_blo(pac) ≤ A^(i-1)·Cap_blo(min), storing the target data in the i-th cache block in the storage medium, where A denotes the ratio between the capacities of cache blocks of adjacent sizes.
10. A computer-readable storage medium storing a computer program, characterized in that when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 5 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811038785.6A CN109240946A (en) | 2018-09-06 | 2018-09-06 | The multi-level buffer method and terminal device of data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811038785.6A CN109240946A (en) | 2018-09-06 | 2018-09-06 | The multi-level buffer method and terminal device of data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109240946A true CN109240946A (en) | 2019-01-18 |
Family
ID=65067502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811038785.6A Pending CN109240946A (en) | 2018-09-06 | 2018-09-06 | The multi-level buffer method and terminal device of data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109240946A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110059023A (en) * | 2019-04-04 | 2019-07-26 | 阿里巴巴集团控股有限公司 | A kind of method, system and equipment refreshing cascade caching |
CN110119338A (en) * | 2019-04-30 | 2019-08-13 | 广州微算互联信息技术有限公司 | A kind of acquisition methods, system and the storage medium of game monitor parameter |
CN110362400A (en) * | 2019-06-17 | 2019-10-22 | 中国平安人寿保险股份有限公司 | Distribution method, device, equipment and the storage medium of caching resource |
CN111694769A (en) * | 2019-03-15 | 2020-09-22 | 上海寒武纪信息科技有限公司 | Data reading method and device |
CN111831699A (en) * | 2020-09-21 | 2020-10-27 | 北京新唐思创教育科技有限公司 | Data caching method, electronic equipment and computer readable medium |
CN111984609A (en) * | 2020-08-19 | 2020-11-24 | 北京龙鼎源科技股份有限公司 | Data storage method, data storage device, storage medium and processor |
CN112035529A (en) * | 2020-09-11 | 2020-12-04 | 北京字跳网络技术有限公司 | Caching method and device, electronic equipment and computer readable storage medium |
WO2021003921A1 (en) * | 2019-07-10 | 2021-01-14 | 平安科技(深圳)有限公司 | Data processing method, and terminal device |
US10922236B2 (en) | 2019-04-04 | 2021-02-16 | Advanced New Technologies Co., Ltd. | Cascade cache refreshing |
CN112631517A (en) * | 2020-12-24 | 2021-04-09 | 北京百度网讯科技有限公司 | Data storage method and device, electronic equipment and storage medium |
CN112882646A (en) * | 2019-11-29 | 2021-06-01 | 北京金山云网络技术有限公司 | Resource caching method and device, electronic equipment and storage medium |
CN117370691A (en) * | 2023-10-08 | 2024-01-09 | 北京安锐卓越信息技术股份有限公司 | Page loading method and device, medium and electronic equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10289219A (en) * | 1997-02-14 | 1998-10-27 | N T T Data:Kk | Client-server system, cache management method and recording medium |
US8117396B1 (en) * | 2006-10-10 | 2012-02-14 | Network Appliance, Inc. | Multi-level buffer cache management through soft-division of a uniform buffer cache |
CN103778071A (en) * | 2014-01-20 | 2014-05-07 | 华为技术有限公司 | Cache space distribution method and device |
CN107430551A (en) * | 2015-12-01 | 2017-12-01 | 华为技术有限公司 | Data cache method, memory control device and storage device |
CN107644020A (en) * | 2016-07-20 | 2018-01-30 | 平安科技(深圳)有限公司 | Data storage and the method and device called |
US20180095682A1 (en) * | 2016-10-03 | 2018-04-05 | International Business Machines Corporation | Profile-based data-flow regulation to backend storage volumes |
CN108491450A (en) * | 2018-02-26 | 2018-09-04 | 平安普惠企业管理有限公司 | Data cache method, device, server and storage medium |
2018-09-06: CN CN201811038785.6A patent/CN109240946A/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10289219A (en) * | 1997-02-14 | 1998-10-27 | N T T Data:Kk | Client-server system, cache management method and recording medium |
US8117396B1 (en) * | 2006-10-10 | 2012-02-14 | Network Appliance, Inc. | Multi-level buffer cache management through soft-division of a uniform buffer cache |
CN103778071A (en) * | 2014-01-20 | 2014-05-07 | 华为技术有限公司 | Cache space distribution method and device |
CN107430551A (en) * | 2015-12-01 | 2017-12-01 | 华为技术有限公司 | Data cache method, memory control device and storage device |
CN107644020A (en) * | 2016-07-20 | 2018-01-30 | 平安科技(深圳)有限公司 | Data storage and the method and device called |
US20180095682A1 (en) * | 2016-10-03 | 2018-04-05 | International Business Machines Corporation | Profile-based data-flow regulation to backend storage volumes |
CN108491450A (en) * | 2018-02-26 | 2018-09-04 | 平安普惠企业管理有限公司 | Data cache method, device, server and storage medium |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111694769A (en) * | 2019-03-15 | 2020-09-22 | 上海寒武纪信息科技有限公司 | Data reading method and device |
CN110059023A (en) * | 2019-04-04 | 2019-07-26 | 阿里巴巴集团控股有限公司 | A kind of method, system and equipment refreshing cascade caching |
CN110059023B (en) * | 2019-04-04 | 2020-11-10 | 创新先进技术有限公司 | Method, system and equipment for refreshing cascade cache |
US10922236B2 (en) | 2019-04-04 | 2021-02-16 | Advanced New Technologies Co., Ltd. | Cascade cache refreshing |
CN110119338A (en) * | 2019-04-30 | 2019-08-13 | 广州微算互联信息技术有限公司 | A kind of acquisition methods, system and the storage medium of game monitor parameter |
CN110362400A (en) * | 2019-06-17 | 2019-10-22 | 中国平安人寿保险股份有限公司 | Distribution method, device, equipment and the storage medium of caching resource |
CN110362400B (en) * | 2019-06-17 | 2022-06-17 | 中国平安人寿保险股份有限公司 | Resource cache allocation method, device, equipment and storage medium |
WO2021003921A1 (en) * | 2019-07-10 | 2021-01-14 | 平安科技(深圳)有限公司 | Data processing method, and terminal device |
CN112882646A (en) * | 2019-11-29 | 2021-06-01 | 北京金山云网络技术有限公司 | Resource caching method and device, electronic equipment and storage medium |
CN111984609A (en) * | 2020-08-19 | 2020-11-24 | 北京龙鼎源科技股份有限公司 | Data storage method, data storage device, storage medium and processor |
CN112035529A (en) * | 2020-09-11 | 2020-12-04 | 北京字跳网络技术有限公司 | Caching method and device, electronic equipment and computer readable storage medium |
CN111831699A (en) * | 2020-09-21 | 2020-10-27 | 北京新唐思创教育科技有限公司 | Data caching method, electronic equipment and computer readable medium |
CN111831699B (en) * | 2020-09-21 | 2021-01-08 | 北京新唐思创教育科技有限公司 | Data caching method, electronic equipment and computer readable medium |
CN112631517A (en) * | 2020-12-24 | 2021-04-09 | 北京百度网讯科技有限公司 | Data storage method and device, electronic equipment and storage medium |
CN112631517B (en) * | 2020-12-24 | 2021-09-03 | 北京百度网讯科技有限公司 | Data storage method and device, electronic equipment and storage medium |
CN117370691A (en) * | 2023-10-08 | 2024-01-09 | 北京安锐卓越信息技术股份有限公司 | Page loading method and device, medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109240946A (en) | The multi-level buffer method and terminal device of data | |
US11349940B2 (en) | Server side data cache system | |
CN105205014B (en) | A kind of date storage method and device | |
US8224804B2 (en) | Indexing of partitioned external data sources | |
CN101493826B (en) | Database system based on WEB application and data management method thereof | |
CN101576918B (en) | Data buffering system with load balancing function | |
JP5744707B2 (en) | Computer-implemented method, computer program, and system for memory usage query governor (memory usage query governor) | |
US20090254594A1 (en) | Techniques to enhance database performance | |
US9774676B2 (en) | Storing and moving data in a distributed storage system | |
US20110264759A1 (en) | Optimized caching for large data requests | |
CN111737168A (en) | Cache system, cache processing method, device, equipment and medium | |
EP3049940B1 (en) | Data caching policy in multiple tenant enterprise resource planning system | |
US20120224482A1 (en) | Credit feedback system for parallel data flow control | |
US10404823B2 (en) | Multitier cache framework | |
CN109767274B (en) | Method and system for carrying out associated storage on massive invoice data | |
CN114730312A (en) | Managed materialized views created from heterogeneous data sources | |
US10146833B1 (en) | Write-back techniques at datastore accelerators | |
CN114443680A (en) | Database management system, related apparatus, method and medium | |
US11429311B1 (en) | Method and system for managing requests in a distributed system | |
CN104391947B (en) | Magnanimity GIS data real-time processing method and system | |
CN114443615A (en) | Database management system, related apparatus, method and medium | |
US11609910B1 (en) | Automatically refreshing materialized views according to performance benefit | |
US20140258216A1 (en) | Management of searches in a database system | |
CN116680295A (en) | Method, system and device for processing data by multiple databases | |
US11537616B1 (en) | Predicting query performance for prioritizing query execution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |