FR3100630A1 - Extended caching and query-time validation - Google Patents
Extended caching and query-time validation
- Publication number
- FR3100630A1 (application FR1909794A)
- Authority
- FR
- France
- Prior art keywords
- search result
- computed
- cache
- computed search
- key
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24534—Query rewriting; Transformation
- G06F16/24539—Query rewriting; Transformation using cached or materialised query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3446—Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3453—Special cost functions, i.e. other than distance or default speed limit of road segments
- G01C21/3492—Special cost functions, i.e. other than distance or default speed limit of road segments employing speed data or traffic data, e.g. real-time or historical
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3605—Destination input or retrieval
- G01C21/3617—Destination input or retrieval using user history, behaviour, conditions or preferences, e.g. predicted or inferred from previous use or current movement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24552—Database cache management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2457—Query processing with adaptation to user needs
- G06F16/24578—Query processing with adaptation to user needs using ranking
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Automation & Control Theory (AREA)
- Software Systems (AREA)
- Social Psychology (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
In a distributed computing environment comprising a frontend system with a search platform having a cache of pre-computed search results and a backend system with one or more databases and a validation instance, a request comprising one or more first key values indicating a first data record is received at the search platform from a client, and at least a first pre-computed search result and a second pre-computed search result for the first data record are retrieved from the cache. The validation instance evaluates a current validity of the first pre-computed search result and the second pre-computed search result retrieved from the cache, and the first pre-computed search result is returned to the client device or, in response to evaluating that the first pre-computed search result is invalid and the second pre-computed search result is valid, the second pre-computed search result is returned to the client. Figure for the abstract: Fig. 1
Description
The present disclosure generally relates to computers and computer software, and in particular to methods, systems, and computer program products that handle search queries in a database system and perform cache update adaptation.
Recent developments in database technology show that it is a common issue to ensure short response times to search queries which require processing large volumes of data. For example, such processing has to be performed in response to so-called "open queries", which contain little input information (e.g., only one or two parameters out of a dozen possible parameters are specified and/or the specified value ranges of the parameters are broad) and, consequently, generally lead to a large number of results. The possibilities to speed up data processing by increasing hardware performance are limited. Thus, attention is drawn to improving the operating mechanisms underlying the processing of large data volumes.
One general approach to shorten response times to queries is to pre-compute or pre-collect results of search queries and maintain them in a cache. Search queries are then not actually processed on the large volumes of original data stored in databases, but on the results maintained in the cache.
Caching, however, has a drawback, namely that the results maintained in the cache may become outdated due to changes in the original data from which the results have been pre-computed or pre-collected. It is therefore an issue to keep the pre-computed or pre-collected results up-to-date in order to ensure that queries answered with results from the cache correctly reflect the corresponding underlying data stored in the databases. Keeping the results held in the cache up-to-date is a tradeoff between competing technical parameters such as computational and transmission load, computational speed, data availability and storage capacity (of both the cache and main memory) on one side, and the required validity and completeness of data on the other side. Strategies for keeping the cache up-to-date, that is, keeping the results maintained in the cache up-to-date, are therefore needed.
US 7,430,641 B2 provides a data storage system including a plurality of controllers and a cache memory connected or otherwise associated with one or more mass data storage devices. The controllers include a communication module and are associated with one or more mass data storage devices, wherein a first controller is adapted to cause a second controller to retrieve a data block associated with a logical unit by transmitting a signal to the second controller via their respective communication modules. The first controller is adapted to signal one or more other controllers whether it has received one or more data block requests. At least one prefetch decision module is adapted to trigger the retrieval of data blocks based on data block requests received by a controller with which it is associated and/or based on data block requests received by other controllers.
US 8,161,264 B2 describes a method for performing data prefetching using indirect addressing that includes determining a first memory address of a pointer associated with a data prefetch instruction. Content, that is included in a first data block of a memory, at the first memory address is then fetched. An offset is then added to the content of the memory at the first memory address to provide a first offset memory address. A second memory address is then determined based on the first offset memory address. A second data block that includes data at the second memory address is then fetched. A data prefetch instruction may be indicated by a unique operational code, a unique extended opcode, or a field in an instruction.
It is an object of the present application to overcome at least some of the problems associated with the prior art.
According to a first aspect, a method for handling data in a distributed computing environment is provided. The distributed computing environment comprises a frontend system with a search platform having a cache of pre-computed search results and a backend system with one or more databases and a validation instance. The one or more databases store data records having a combination of at least one key parameter, wherein each key parameter has a key value out of a finite number of predefined key values. The cache hosts, for at least a part of the data records, at least two pre-computed search results out of a set of search results for a given data record which are computable based on the key value of the at least one key parameter of the given data record. The method comprises, at the search platform, receiving a request from a client device comprising one or more first key-values indicating a first data record. The method retrieves, in response to receiving the request, from the cache at least a first pre-computed search result and a second pre-computed search result for the first data record. The method evaluates, by inquiring the validation instance, a current validity of the first pre-computed search result and the second pre-computed search result retrieved from the cache. In response to evaluating that at least the first pre-computed search result is valid, the method returns the first pre-computed search result to the client device. In response to evaluating that the first pre-computed search result is invalid and the second pre-computed search result is valid, the method returns the second pre-computed search result to the client device.
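To make the first aspect concrete, the following Python sketch shows one possible way a search platform could implement it; the `PrecomputedResult` structure and the `cache.lookup` / `validation_instance.is_valid` interfaces are hypothetical names introduced here for illustration and are not defined by the application.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class PrecomputedResult:
    record_key: tuple   # key values identifying the data record, e.g. ("North America", "Asia")
    rank: int           # 1 = first-ranked, 2 = second-ranked, ...
    payload: dict       # e.g. {"path": "#1", "via": "data center 3 (Europe)", "delay": 10}

def handle_request(cache, validation_instance, key_values) -> Optional[PrecomputedResult]:
    """Return the highest-ranked pre-computed search result that is still valid,
    or None if all retrieved results are invalid."""
    # Retrieve at least the first- and second-ranked pre-computed results from the cache.
    candidates: Sequence[PrecomputedResult] = cache.lookup(key_values, limit=2)
    # Ask the validation instance whether each retrieved result is still valid.
    for result in sorted(candidates, key=lambda r: r.rank):
        if validation_instance.is_valid(result):
            # Return the first valid result (the first-ranked one is preferred).
            return result
    # All retrieved pre-computed results are invalid: return an invalidity indication.
    return None
```

Under these assumptions, a client asking for the record identified by the key values ("North America", "Asia") would receive the first-ranked result when it is valid, the second-ranked result when only the first is invalid, and an invalidity indication (here `None`) when both are invalid.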
In some embodiments, the set of search results comprises a set of ordered search results, wherein the set of ordered search results comprises at least a first-ranked search result and a second-ranked search result and the at least two pre-computed search results comprise at least the first-ranked search result and the second-ranked search result, wherein the first pre-computed search result is the first-ranked search result and the second pre-computed search result is the second-ranked search result from the set of ordered search results for the given data record.
In some embodiments, the search platform returns, in response to evaluating that the first pre-computed search result is invalid, an indication that the first pre-computed search result is currently invalid.
In some embodiments, the search platform deletes the first pre-computed search result from the cache in response to evaluating that the first pre-computed search result is invalid, and/or deletes the second pre-computed search result from the cache in response to evaluating that the second pre-computed search result is invalid.
In some embodiments the search platform returns an invalidity indication to the client in response to evaluating that all pre-computed search results retrieved from the cache in response to receiving the request are invalid.
According to some embodiments, the evaluation of the validity of the first pre-computed search result and the second pre-computed search result comprises transmitting the key-value of the at least one key-parameter to the validation instance.
According to some embodiments, the frontend system comprises a cache manager, the method further comprising at the cache manager triggering a pre-computation of at least two pre-computed search results for a given data record in response to determining that a probability that at least the first pre-computed search result or the second pre-computed search result stored at the cache is outdated exceeds a given threshold.
In some embodiments, determining that a probability that at least the first pre-computed search result or the second pre-computed search result stored at the cache is outdated exceeds a given threshold comprises calculating an aging value given by
[Math. 1]
wherein t denotes a current time or the estimated time of receipt of the first and/or second pre-computed search result at the cache, C denotes an aging rate modelled by a probabilistic model, and t0 denotes a timestamp indicating the time when the first and/or the second pre-computed search result was pre-computed. The aging value is compared with a given threshold value, determining that the first pre-computed search result and/or the second pre-computed search result is likely outdated if the aging value is below the given threshold value.
In some embodiments, the pre-computation is triggered in response to evaluating that the first pre-computed search result and/or the second pre-computed search result is invalid.
According to some embodiments, the pre-computation comprises indicating to the backend system that the first pre-computed search result and/or the second pre-computed search result is invalid and replacing the first pre-computed search result from the set of search results for the given data record and/or the second pre-computed search result from the set of search results for the given data record by further search results for the given data record.
According to still another aspect a computing machine is provided, the computing machine acting as a search platform for handling data in a distributed computing environment comprising a frontend system with a search platform having a cache, a backend system with one or more databases storing data records having a combination of at least one key parameter, wherein each key parameter has a key value out of a finite number of predefined key values, and a validation instance, the search platform being arranged to execute the method of any one of the aforementioned aspects.
According to still another aspect, a computer program product is provided, the computer program product comprising program code instructions stored on a computer readable medium to execute the method steps according to any one of the aforementioned aspects, when said program is executed on a computer.
The present mechanisms will be described with reference to accompanying figures. Similar reference numbers generally indicate identical or functionally similar elements.
The subject disclosure generally pertains to handling queries in a distributed computing system as shown in FIG. 1. The distributed computing system comprises one or more clients 1, a frontend system 2 with a search platform 3 having a cache 4 with pre-computed or pre-collected search results stored therein, and a backend system 5 with one or more databases 7 and a validation instance 6. Frontend system 2 may also comprise a cache manager 4a. Clients 1, frontend system 2 and backend system 5 may be located anywhere and are individual computing machines such as personal computers, mobile stations such as laptops or tablet computers, smartphones, and the like, as well as, in some embodiments, more powerful machines such as database application servers, distributed database systems respectively comprising multiple interconnected machines, data centers, etc. In some embodiments, the frontend system 2 and/or the backend system 5 might be machines similar to the clients 1, while, in other embodiments, the frontend system 2 and/or the backend system 5 are more powerful than the clients 1. In one embodiment, the frontend system 2 and/or the backend system 5 and the clients 1 are data centers which may be distributed worldwide.
Frontend system 2, backend system 5 and the clients 1 may each be constituted of several hardware machines, depending on performance requirements. Frontend system 2, backend system 5 and clients 1 are embodied e.g. as stationary or mobile hardware machines comprising computing machines 100 as illustrated in FIG. 11, and/or as specialized systems such as embedded systems arranged for a particular technical purpose, and/or as software components running on a general or specialized computing hardware machine (such as a web server and web clients).
Frontend system 2, backend system 5 and the clients 1 are interconnected by the communication interfaces 8 and 9. Each of the interfaces 8 and 9 utilizes a wired or wireless Local Area Network (LAN), a wired or wireless Metropolitan Area Network (MAN), or a wired or wireless Wide Area Network (WAN) such as the Internet, or a combination of the aforementioned network technologies, and is implemented by any suitable communication and network protocols.
Database queries which are requested from client 1 over the communication interface 8 are received at search platform 3. Search platform 3 may implement standardized communication protocols across the layers of the OSI reference model. Amongst others, the search platform 3 may employ initial processing mechanisms such as error recognition and correction, packet assembly, as well as determination of whether a valid database query has been received. Invalid messages may already be discarded by the search platform 3 for reasons of security and performance.
The cache 4 may be implemented as a further database (in addition to the one or more databases 7). In some embodiments, the cache 4 may also be a logical cache, i.e. the data of the cache 4 is held in respectively assigned areas of a memory of the hardware machine(s) which host(s) one or more databases 7. Cache 4 stores the pre-computed or pre-collected path combinations derived from the underlying data stored in one or more databases 7.
Backend system 5 comprises a validation instance 6 for determining the validity of data, e.g. the pre-computed or pre-collected results stored in cache 4. Within the context of the example, validation instance 6 examines whether one or more path combinations are currently available.
The one or more databases 7 store data which is generally up-to-date and, thus, forms original or valid data. The one or more databases 7 may be equipped with an interface to update the data stored in the databases 7. This interface may be the same as the interface to receive and respond to database queries. The data stored by the databases 7 is continuously kept updated, which means that any change of the data is actually effected in the databases 7, e.g. on an event-based or periodic basis. Hence, the databases 7 are either an original data source themselves, such as an inventory database or a database maintaining any kind of original and generally valid results, or access one or more original data sources in order to store original results in identical (mirror) or processed form. If the databases 7 generate/compute/collect the original results by accessing other/further original data sources in order to prepare original results, the databases 7 provide results which generally accurately reflect the current content of the original response data.
The methods presented in the subsequent embodiments are applicable for all database systems and computing systems with a hardware or functional architecture as shown in FIG. 1 or with a similar architecture.
An example of pre-computed or pre-collected results as mentioned in the preceding paragraphs are the possible path combinations connecting data centers (or nodes) in a distributed network as shown in FIG. 2. The network comprises millions of data centers distributed worldwide and millions of possible paths connecting these data centers. The total number of data centers and paths is fixed. The paths are associated, among others, with certain signal propagation delays and certain bandwidths. Within the context of this example, a database exists which stores the available, worldwide distributed data centers, the currently existing paths between these data centers, the associated signal propagation delays and bandwidths, and information on whether a data center is currently in an operational state. Using this data as underlying data, path combinations connecting the worldwide distributed nodes are pre-computed or pre-collected and stored in cache 4, which can be queried by a client 1 intending to transmit data from one data center to another.
If, within the context of this example and FIG. 2, client 1 intends to transmit huge amounts of data, e.g. from data center 1, located in North America, to data center 2, located in South East Asia, then the user usually wishes to choose a path combination which transmits the data as fast and as efficiently as possible. This could, e.g., mean keeping the signal propagation delay as low as possible.
An appropriate cache-based search of all possible pre-computed or pre-collected paths for the transmission of the data results, e.g., in a path going over data center 3, located in Europe, which provides the lowest signal propagation delay. Within the context of this example, however, a validation performed on this search result reveals that data center 3 is non-operational at the moment when the data ought to be transmitted. Therefore, the path going over data center 3 is not usable for the transmission of the data, and the search has resulted in an invalid result.
The pre-computed or pre-collected path combinations stored in the cache 4 have to be re-computed in order to yield updated path combinations for the cache 4. Once the re-computation has been completed, a new search requested by client 1 results in a path combination connecting data center 1 with data center 2 over data center 4, located in Africa. The signal propagation delay is higher; however, this path combination over data center 4 currently provides the best available option, since the better path combination going over data center 3 is currently not usable. Within this context, the cache 4 was re-computed and the search was repeated by client 1, resulting in an additional computational load and additional traffic load on the computing system.
The one or more databases 7 store data records having a combination of at least one key parameter, wherein each key parameter has a key value out of a finite number of predefined key values. The data records comprise tables with underlying data from which possible global network connections can be pre-computed. Key-parameters comprise parameters such as "origin" and "destination", meaning the sites of the data centers transmitting and receiving data, respectively. Further key-parameters comprise, e.g., signal propagation delay and bandwidth, as well as any further parameters such as distance, latency etc., which within this context may generally be denoted as "key-parameter 5", "key-parameter 6" etc.
The key-values, taken from a finite number of predefined key-values, may be, e.g., for the key-parameters "origin" and "destination", taken from the range "Africa, Asia, Europe, North America, South America, Australia". The key-values for the key-parameter "signal propagation delay" (measured in arbitrary units) may be taken from the range between 10 and 1500. The key-values for the key-parameter "bandwidth" may be taken from the range between 1 and 10000, which within this example is also given in arbitrary units. In general, key-values for a key-parameter denoted, e.g., "key-parameter X" may be taken from a range "key-value X-1 to key-value X-Y".
From the underlying data, the possible worldwide path combinations connecting two data centers are pre-computed as search results and stored in cache 4. As an example shown in FIG. 3, the possible path combinations between data center 1 located in North America and data center 2 located in Asia may be:
- path combination #1 going over data center 3 located in Europe
- path combination #31 going over data center 4 located in Africa
- path combination #5 going over data center 5 located in Australia
- path combination #2 going over data center 6 located in South America
- path combination #200 going over data center 3 located in Europe
- path combination #220 going over data center 4 located in Africa
- path combination #202 going over data center 2 located in Asia
- path combination #231 going over data center 1 located in North America
Each of the possible path combinations may have a specific signal propagation delay and, as an example of a further parameter, a specific bandwidth, which may serve as a basis for a possible ranking of the path combinations. These ranked path combinations may then form ranked pre-computed search results including a first-ranked search result and a second-ranked search result. FIG. 4 shows a table with a set of ranked and ordered search results, which may be an SQL table held by cache 4, comprising in total seven ranked path combinations taken from the path combinations pre-computed earlier by search platform 3. The table displays in its columns the number and the rank of the path combinations, the individual signal propagation delays of the path combinations and the numbers of the data centers over which the path combinations go (or are directed). The ranking is based on signal propagation delays, wherein the first-ranked search result is path combination #1 having the lowest signal propagation delay, which is assigned a value of '10' within this example. The second-ranked search result may be path combination #31 with the second lowest signal propagation delay, which is assigned a value of '11' within the current example. As shown in FIG. 4, the numbers of the path combinations do not have to coincide with the ranking; e.g., path combination #31 has been ranked second.
Cache 4 hosts, for at least a part of the data records, at least two pre-computed search results out of a set of search results for a given data record which are computable based on the key value of the at least one key parameter of the given data record. Within the current example, a pre-computed search result comprises a path combination connecting two data centers, e.g. data center 1 located in North America and data center 2 located in South East Asia. The connection between these two data centers can either be direct or go over at least one additional data center. All the possible connections between data center 1 and data center 2 form a set of search results which, within the context of the current example, are calculated based on at least one key-value of the key-parameter "signal propagation delay", resulting in a value for the signal propagation delay which may be used for a subsequent ranking of the search results, as shown in FIG. 4. A first pre-computed search result may be path combination #1 connecting data center 1 with data center 2 going over data center 3 located in Europe and possessing the lowest signal propagation delay with a value of '10'. A second pre-computed search result may be path combination #31 connecting data center 1 with data center 2 going over data center 4 located in Africa, possessing a higher signal propagation delay with a value of '11'. If the first search result is invalid since, e.g., data center 3 in Europe is not operational, a client 1 sending a request to search platform 3 may prefer to receive, instead of the first search result, the second pre-computed search result, which is valid since data center 4 is operational, although the signal propagation delay may be higher.
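As an illustration only, the ordered cache contents described above could be laid out as follows; the dictionary structure, the field names and the `lookup` helper are assumptions made for this sketch and are not part of the application.

```python
# Hypothetical in-memory layout of cache 4 for the example of FIG. 4: for each data
# record (identified by its key values) an ordered set of pre-computed search results,
# ranked here by signal propagation delay.
cache_contents = {
    ("North America", "Asia"): [
        {"rank": 1, "path": "#1",  "via": "data center 3 (Europe)", "delay": 10},
        {"rank": 2, "path": "#31", "via": "data center 4 (Africa)", "delay": 11},
        # ... further, lower-ranked path combinations ...
    ],
}

def lookup(key_values, limit=2):
    """Return up to `limit` highest-ranked pre-computed results for a data record."""
    results = cache_contents.get(tuple(key_values), [])
    return sorted(results, key=lambda r: r["rank"])[:limit]

# Example: the two candidates handed to the subsequent validation step.
first, second = lookup(("North America", "Asia"))
```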
A message sequence diagram for the processing of client requests at the search platform 3 is shown in FIG. 5 according to some embodiments. Search platform 3 receives, in an activity 20, a request from a client device 1 comprising one or more first key-values indicating a first data record. A request can, in some embodiments, be formulated as a request written in SQL (structured query language) and directed to an SQL database. Within the context of the current example, the request is for a data connection line, and the one or more first key-values comprise key-values related to the key-parameters "origin" and "destination", such as "North America" and "Asia", respectively. Search platform 3 retrieves, in response to receiving the request, in an activity 21, from the cache 4 at least a first pre-computed search result and a second pre-computed search result for the first data record. These pre-computed search results may be in the form of SQL tables as shown in FIG. 4. To cite again the current example, the first pre-computed search result may be the aforementioned path combination #1 connecting data center 1 with data center 2 going over data center 3 located in Europe, possessing the lowest signal propagation delay with a value of '10', and the second pre-computed search result may be the aforementioned path combination #31 connecting data center 1 with data center 2 going over data center 4 located in Africa, possessing a higher signal propagation delay with a value of '11'.
In an activity 22, search platform 3 evaluates, by inquiring in an activity 23 the validation instance 6, a current validity of the first pre-computed search result and the second pre-computed search result retrieved from the cache. In response to inquiring the validation instance 6, search platform 3 receives, in an activity 24, information concerning the validity of the corresponding search results. The evaluation of the validity of a search result may comprise evaluating whether a record stored in an SQL table as shown in FIG. 4 is still valid. To stay within the current example, the evaluation may comprise finding out whether the data centers 3 and 4 are currently in an operational or non-operational state. If one of the data centers 3 and 4 is in a non-operational state, path combinations going over that data center are likewise non-operational, meaning that it is currently impossible to transmit data over those path combinations. The corresponding validity evaluated by search platform 3 may therefore be valued '0', rendering the corresponding search result invalid. On the other hand, when the path combination is operational and data can be transmitted over that path combination, the validity calculated by search platform 3 may be '1', so the corresponding search result is rendered valid.
In addition to evaluating whether a search result is valid or invalid and returning respective indications to the search platform, in some embodiments, the validation instance 6 may also return updated key-values for one or more key-parameters for the pre-computed data records indicated by the search platform 3. This may result in a reordering of the ranking of the search results based on the one or more updated key-values, such as, e.g., the signal propagation delays of path combinations #1 and #31. For example, due to a current high load on the section between data center 1 and data center 3, the validation instance 6 may return for path combination #1 a current value much lower than '10000' for the bandwidth, which would result in a new and lower ranking of the originally first-ranked path combination #1. In response to this, path combination #31 may be assigned the new rank of first-ranked search result.
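A possible sketch of such a re-ranking step is given below; the field names and the `updated_delays` mapping are hypothetical and only illustrate how updated key-values returned by the validation instance could reorder the cached results.

```python
def rerank_on_updated_values(results, updated_delays):
    """Reorder cached results after the validation instance reports updated key-values
    (here: signal propagation delays) for some path combinations.

    `updated_delays` maps a path identifier to its current delay; both the mapping and
    the field names are assumptions for illustration only.
    """
    for r in results:
        r["delay"] = updated_delays.get(r["path"], r["delay"])
    # The lowest delay becomes the new first-ranked search result.
    results.sort(key=lambda r: r["delay"])
    for new_rank, r in enumerate(results, start=1):
        r["rank"] = new_rank
    return results
```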
In response to evaluating that at least the first pre-computed search result is valid, search platform 3 returns in an activity 25 the first pre-computed search result to the client device 1, or, in response to evaluating that the first pre-computed search result is invalid and the second pre-computed search result is valid, search platform 3 returns the second pre-computed search result to the client device 1. In some embodiments, the search results may be an SQL table. To remain within the context of the current example, search platform 3 returns to client 1 the path combination going over data center 3 in the case data center 4 is in a non-operational state, or search platform 3 returns to client 1 the path combination going over data center 4 in the case data center 3 is in a non-operational state. These search results may, e.g., be returned as an SQL table comprising just one row.
The computation of a two-part answer to the request by client 1, i.e. of an alternative second search result in addition to the first search result, eliminates the problem of re-computing the entire cache in the case the first search result is invalid, therefore reducing the computational load of the entire computing system (frontend system, backend system) as well as the transmission load, since the number of requests for underlying data stored in databases serving as a basis for the re-computation of search results is also reduced. This is contrary to the prior art described in the background section as well as in US 7,430,641 B2 and US 8,161,264 B2, which either presents only one search result or does not address the presentation of search results at all. In addition, client 1 receives the pre-computed search results in response to the request in a shorter time, since the second search result is already held by cache 4 and therefore client 1 does not have to wait until an entire re-computation of cache 4 has been completed in order to obtain the second search result.
According to some embodiments, the set of search results comprises a set of ordered search results, wherein the set of ordered search results comprises at least a first-ranked search result and a second-ranked search result, and the at least two pre-computed search results comprise at least the first-ranked search result and the second-ranked search result, wherein the first pre-computed search result is the first-ranked search result and the second pre-computed search result is the second-ranked search result from the set of ordered search results for the given data record. Cache 4 holds at least two pre-computed search results comprising at least the first-ranked search result and the second-ranked search result, in the case of the present example the first-ranked path combination #1 and the second-ranked path combination #31. The first pre-computed search result is path combination #1, which is simultaneously the first-ranked search result, and the second pre-computed search result is path combination #31, which is the second-ranked search result from the set of ordered search results for the given data record, as shown in the table in FIG. 4.
According to some embodiments, the search platform 3 returns, in response to evaluating that the first pre-computed search result is invalid, an indication that the first pre-computed search result is currently invalid. The indication can comprise a message displayed on a suitable website such as a corporate website, a message sent by email, an instant messaging service, SMS etc. Staying within the current example, search platform 3 may return to client 1 a message that the path combination over data center 3 is currently not in an operational state.
According to some embodiments, the search platform 3 deletes the first pre-computed search result from cache 4 in response to evaluating that the first pre-computed search result is invalid, and/or deletes the second pre-computed search result from cache 4 in response to evaluating that the second pre-computed search result is invalid. In the case of an SQL table, this may be equivalent to deleting the first and/or the second row of the table. Citing again the current example, search platform 3 deletes in FIG. 6 path combination #1 in the case path combination #1, representing the connection between data center 1 and data center 2 over data center 3 located in Europe, is non-operational and therefore invalid (having a validity of '0'). In the analogous case, if the connection between data center 1 and data center 2 over data center 4 in Africa is non-operational and therefore invalid, search platform 3 deletes path combination #31, as shown in FIG. 7.
According to some embodiments, search platform 3 returns an invalidity indication to client 1 in response to evaluating that all pre-computed search results retrieved from the cache in response to receiving the request are invalid. The invalidity indication can comprise a message displayed on a suitable website such as a corporate website, a message sent by email, an instant messaging service, SMS etc. If, within the context of the example, all seven path combinations listed in FIG. 4 are non-operational due to a failure of a node such as a data center, then a message is returned to client 1 that currently no connection is possible between data center 1 in North America and data center 2 in South East Asia and therefore no data can be transmitted at the moment.
According to some embodiments, the evaluation of the validity of the first pre-computed search result and the second pre-computed search result comprises transmitting the key-value of the at least one key-parameter to the validation instance. In the case of an SQL table, a record stored in the table may be transmitted. Within the current example, search platform 3 transmits the key-values for the key-parameters "origin" and "destination", e.g. the key-values "North America" and "Asia", to the validation instance 6. At the validation instance 6, the possible path combinations connecting data center 1 and data center 2 are validated and, in the case that a path combination, e.g. path combination #1 going over data center 3 located in Europe, is not available for data transmission, that path combination gets assigned a validity of '0'.
In some embodiments and as aforementioned, frontend system 2 comprises a cache manager 4a. In some further embodiments, the method further comprises, at the cache manager 4a, triggering a pre-computation of at least two pre-computed search results for a given data record in response to determining that a probability that at least the first pre-computed search result or the second pre-computed search result stored at the cache is outdated exceeds a given threshold. In the case of an SQL table, at least two rows will be re-computed. Within the current example, FIG. 8 shows a message sequence diagram for the population of cache 4 with updated pre-computed search results. In an activity 30, cache manager 4a triggers the pre-computation (or re-computation) of the possible world-wide path combinations after cache manager 4a has determined that the probability that the path combinations currently held in cache 4 are outdated exceeds a given threshold. Computation platform 8 receives in an activity 31 a request to pre-compute (or re-compute) the path combinations. In order to perform this, computation platform 8 sends in an activity 32 a query to the one or more databases 7 for the underlying data necessary to pre-compute the path combinations. In an activity 33, computation platform 8 receives the corresponding data from the data records and, in an activity 34, computation platform 8 generates the updated worldwide path combinations connecting two data centers. In an activity 35, computation platform 8 sends to cache 4 the updated pre-computed path combinations.
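The sequence of activities 30 to 35 could be orchestrated roughly as in the following sketch; the component interfaces (`cache_manager.likely_outdated`, `database.query`, `computation_platform.compute_paths`, `cache.store`) are assumed names used only to show the order of the steps.

```python
def refresh_record(record_key, cache, cache_manager, computation_platform, database):
    """Sketch of activities 30-35 of FIG. 8: re-compute the pre-computed search
    results of one data record once the cache manager deems them likely outdated."""
    # Activity 30: the cache manager decides whether a re-computation is needed.
    if not cache_manager.likely_outdated(record_key):
        return
    # Activities 31-33: the computation platform is asked to re-compute and fetches
    # the underlying data from the one or more databases.
    underlying_data = database.query(record_key)
    # Activity 34: generate the updated path combinations for the record.
    updated_results = computation_platform.compute_paths(record_key, underlying_data)
    # Activity 35: populate the cache with the updated pre-computed search results.
    cache.store(record_key, updated_results)
```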
According to some embodiments, determining by the cache manager 4a that a probability that at least the first pre-computed search result or the second pre-computed search result stored at the cache is outdated exceeds a given threshold comprises calculating an aging value given by
[Math. 1]
wherein t denotes a current time or the estimated time of receipt of the first and/or second pre-computed search result at the cache, C denotes an aging rate modelled by a probabilistic model, and t0 denotes a timestamp indicating the time when the first and/or the second pre-computed search result was pre-computed, e.g. path combination #1 connecting data center 1 with data center 2 over data center 3 located in Europe and/or path combination #31 connecting data center 1 with data center 2 over data center 4 located in Africa, respectively.
The aging rate C may be employed to provide an estimate of the probability for the pre-computed path combinations to stay valid after a given time. This is also referred to as the probability of the path combinations being valid or, in other words, not being outdated. Two exemplary functions of this probable validity decreasing over time are depicted in FIG. 9. Function F represents path combinations which potentially remain more accurate (or, more correctly, stay at a higher probability of being valid over time) than other path combinations associated with function G. For example, the path combinations represented by function F have a roughly 70% probability of being still valid 35 hours after their last generation, while the other path combinations characterized by function G are only valid with a probability of about 50% at 35 hours after their latest generation. The cache manager 4a compares the validity probability value for, e.g., path combination #1 connecting data center 1 with data center 2 over data center 3 located in Europe or path combination #31 connecting data center 1 with data center 2 over data center 4 located in Africa with a given threshold value and determines that the requested data is likely invalid if the validity probability value is below the given threshold value.
Cache manager 4a compares the aging value with a given threshold value and determines that the first pre-computed search result and/or the second pre-computed search result, e.g. path combination #1 connecting data center 1 with data center 2 over data center 3 located in Europe or path combination #31 connecting data center 1 with data center 2 over data center 4 located in Africa, is likely outdated if the aging value is below the given threshold value.
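The exact expression of [Math. 1] is not reproduced in this text; the sketch below therefore assumes an exponential decay exp(-C * (t - t0)), which is consistent with the described behaviour (an aging value that starts high at the pre-computation time t0, decreases over time at a rate governed by C, and marks a result as likely outdated once it falls below the threshold), but the actual formula of the application may differ.

```python
import math
import time
from typing import Optional

def aging_value(t0: float, C: float, t: Optional[float] = None) -> float:
    """Aging value of a result pre-computed at time t0, evaluated at time t.

    An exponential decay exp(-C * (t - t0)) is assumed here; it decreases over
    time and decreases faster for larger aging rates C (cf. functions F and G
    in FIG. 9).
    """
    if t is None:
        t = time.time()
    return math.exp(-C * (t - t0))

def likely_outdated(t0: float, C: float, threshold: float) -> bool:
    # A result is considered likely outdated when its aging value has dropped
    # below the given threshold value.
    return aging_value(t0, C) < threshold
```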
According to some embodiments, the pre-computation is triggered in response to evaluating that the first pre-computed search result and/or the second pre-computed search result is invalid. To stay within the current example, if the evaluation at validation instance 6 reveals that path combination #1, representing the connection between data center 1 and data center 2 over data center 3 located in Europe, is non-operational and therefore invalid (having a validity of '0') and/or path combination #31, representing the connection between data center 1 and data center 2 over data center 4 in Africa, is non-operational and therefore invalid (having a validity of '0'), cache manager 4a triggers in activity 30 of FIG. 8 the pre-computation of the possible world-wide path combinations.
According to some embodiments, the pre-computation comprises indicating to the backend system that the first pre-computed search result and/or the second pre-computed search result is invalid and replacing the first pre-computed search result from the set of search results for the given data record and/or the second pre-computed search result from the set of search results for the given data record by further search results for the given data record. Taking again the case of an SQL table, this may be equivalent to a reordering of the rows of the table. FIG. 10 shows for the present example the replacement of both path combination #1, representing the connection between data center 1 and data center 2 over data center 3 located in Europe, and path combination #31, representing the connection between data center 1 and data center 2 over data center 4 located in Africa, with lower-ranked path combinations. In FIG. 10, path combination #2 is placed in the position of the first-ranked search result and path combination #5 is placed in the position of the second-ranked search result.
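A minimal sketch of the replacement step of FIG. 10, with hypothetical field names: invalidated path combinations are dropped from the ordered set and the remaining ones move up in rank.

```python
def replace_invalid(results, invalid_paths):
    """Drop invalidated path combinations (e.g. #1 and #31) from the ordered set and
    promote the remaining ones, so that a lower-ranked combination such as #2 becomes
    the new first-ranked search result, as in FIG. 10. Field names are illustrative."""
    remaining = [r for r in results if r["path"] not in invalid_paths]
    for new_rank, r in enumerate(remaining, start=1):
        r["rank"] = new_rank
    return remaining
```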
Another example of the application of the methodologies described herein is navigation-related and refers to the computation of a route between two locations. If a car driver wishes, e.g., to know the shortest connection between Central Park in New York City and the J. F. Kennedy Airport, a request sent from his mobile phone may show him the shortest connection. In line with the methodologies described above, the cache 4 of the search platform 3 keeps a number of pre-computed search results which may be ranked in the order of distance between the two locations, while the validation instance 6 keeps current information about traffic jams, road works, etc., and may, e.g., have the information that the shortest connection is currently not usable due to road maintenance work. In response to the request, the driver is provided not only with the current information that the shortest connection is currently not usable (meaning that this first-ranked search result is "invalid"). The driver is also provided with the additional information on which alternative route can currently be used, i.e., e.g., the second-ranked route kept by the cache 4. This alternative route may be slightly longer in distance but is currently free, since there is, e.g., no roadwork going on and there are no traffic jams, which is determined by the validation instance 6 that assesses the second-ranked route to be valid. Therefore, the second-ranked search result, the alternative route, is returned and presented to the driver. In another situation, the shortest connection may still be usable, i.e. not blocked and thereby not invalid, but loaded with a heavy traffic jam. For example, the validation instance 6 returns a time for completing this shortest connection which may be longer than the time needed to pass the alternative route according to the second-ranked search result. The search platform 3 may then return both the first-ranked search result and the second-ranked search result, but also indicate to the driver's device that the first-ranked result is currently actually slower than the second-ranked result.
FIG. 11 is a diagrammatic representation of the internal components of a computing machine of the client 1, frontend system 2, backend system 5, computation platform 8 or databases. The computing machine 100 includes a set of instructions that, when executed by the computing machine 100, cause the computing machine 100 to perform any of the methodologies discussed herein. The computing machine 100 includes at least one processor 101, a main memory 106 and a network interface device 103 which communicate with each other via a bus 104. Optionally, the computing machine 100 may further include a static memory 105 and a disk-drive unit. A video display, an alpha-numeric input device and a cursor control device may be provided as examples of the user interface 102. The network interface device 103 connects the computing machine 100 to the other components of the distributed computing system such as the clients 1, frontend system 2, backend system 5 or further components such as databases.
Computing machine 100 also hosts the cache 107. The cache 107 may store the received database tables. The cache 107 within the present embodiments may be composed of hardware and software components that store the database tables so that future requests for the database tables can be served faster than without caching. There can be hardware-based caches such as CPU caches, GPU caches, digital signal processors and translation lookaside buffers, as well as software-based caches such as page caches, web caches (Hypertext Transfer Protocol, HTTP, caches) etc. Client 1, frontend system 2, backend system 5, computation platform 8 or databases may comprise a cache 107. Computation platform 8 starts data processing such as decoding the received database tables and eliminating errors residing in the database tables by removing, e.g., redundant data sets from the database tables or data sets with missing entries. Furthermore, the database tables are brought into a common data format to ease further processing.
A set of computer-executable instructions (i.e., computer program code) embodying any one, or all, of the methodologies described herein resides completely, or at least partially, in or on a machine-readable medium, e.g., the main memory 106. Main memory 106 hosts computer program code for functional entities such as database request processing 108, which includes the functionality to receive and process database requests, and data processing functionality 109. The instructions may further be transmitted or received as a propagated signal via the Internet through the network interface device 103 or via the user interface 102. Communication within the computing machine is performed via bus 104. Basic operation of the computing machine 100 is controlled by an operating system which is also located in the main memory 106, the at least one processor 101 and/or the static memory 105.
In general, the routines executed to implement the embodiments, whether implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions, or even a subset thereof, may be referred to herein as "computer program code" or simply "program code". Program code typically comprises computer-readable instructions that are resident at various times in various memory and storage devices in a computer and that, when read and executed by one or more processors in a computer, cause that computer to perform the operations necessary to execute operations and/or elements embodying the various aspects of the embodiments of the invention. Computer-readable program instructions for carrying out operations of the embodiments of the invention may be, for example, assembly language or either source code or object code written in any combination of one or more programming languages.
Claims (11)
- A method for handling data in a distributed computing environment comprising:
a frontend system with a search platform having a cache of pre-computed search results and
a backend system with one or more databases and a validation instance,
wherein the one or more databases store data records having a combination of at least one key parameter, wherein each key parameter has a key value out of a finite number of predefined key values,
wherein the cache hosts, for at least a part of the data records, at least two pre-computed search results out of a set of search results for a given data record which are computable based on the key value of the at least one key parameter of the given data record,
the method comprising, at the search platform:
receiving a request from a client device comprising one or more first key-values indicating a first data record,
retrieving, in response to receiving the request, from the cache at least a first pre-computed search result and a second pre-computed search result for the first data record,
evaluating, by inquiring the validation instance, a current validity of the first pre-computed search result and the second pre-computed search result retrieved from the cache,
in response to evaluating that at least the first pre-computed search result is valid, returning the first pre-computed search result to the client device, or
in response to evaluating that the first pre-computed search result is invalid and the second pre-computed search result is valid, returning the second pre-computed search result to the client device.
- The method according to claim 1, wherein the set of search results comprises a set of ordered search results, wherein the set of ordered search results comprises at least a first-ranked search result and a second-ranked search result and the at least two pre-computed search results comprise at least the first-ranked search result and the second-ranked search result, wherein the first pre-computed search result is the first-ranked search result and the second pre-computed search result is the second-ranked search result from the set of ordered search results for the given data record.
- The method according to claim 1 or claim 2, further comprising, at the search platform, returning, in response to evaluating that the first pre-computed search result is invalid, an indication that the first pre-computed search result is currently invalid.
- The method of any one of claims 1 to 3, further comprising, at the search platform,
deleting the first pre-computed search result from the cache in response to evaluating that the first pre-computed search result is invalid, and/or
deleting the second pre-computed search result from the cache in response to evaluating that the second pre-computed search result is invalid.
- The method of any one of claims 1 to 4, further comprising, at the search platform,
returning an invalidity indication to the client in response to evaluating that all pre-computed search results retrieved from the cache in response to receiving the request are invalid.
- The method according to any one of claims 1 to 5, wherein the evaluation of the validity of the first pre-computed search result and the second pre-computed search result comprises transmitting the key-value of the at least one key-parameter to the validation instance.
- The method according to any one of claims 1 to 6, wherein the frontend system comprises a cache manager, the method further comprising, at the cache manager,
triggering a pre-computation of at least two pre-computed search results for a given data record in response to determining that a probability that at least the first pre-computed search result or the second pre-computed search result stored at the cache is outdated exceeds a given threshold.
- The method according to claim 7, wherein determining that a probability that at least the first pre-computed search result or the second pre-computed search result stored at the cache is outdated exceeds a given threshold comprises:
calculating an aging value given by
[Math. 1]: $e^{-C \cdot (t - t_0)}$
wherein t denotes a current time or the estimated time of receipt of the first and/or second pre-computed search result at the cache, C denotes an aging rate modelled by a probabilistic model and t0 a timestamp indicating the time when the first and/or the second pre-computed search result was precomputed, and
comparing the aging value with a given threshold value, determining that the first pre-computed search result and/or the second pre-computed search result is likely outdated if the aging value is below the given threshold value.
- The method according to claim 8, wherein the pre-computation comprises:
indicating to the backend system that the first pre-computed search result and/or the second pre-computed search result is invalid, and
replacing the first pre-computed search result from the set of search results for the given data record and/or the second pre-computed search result from the set of search results for the given data record by further search results for the given data record.
- A computing machine acting as a search platform for handling data in a distributed computing environment comprising a frontend system with a search platform having a cache, a backend system with one or more databases storing data records having a combination of at least one key parameter, wherein each key parameter has a key value out of a finite number of predefined key values, and a validation instance, the search platform being arranged to execute the method of any one of claims 1 to 10.
- A computer program product comprising program code instructions stored on a computer readable medium to execute the method steps according to any one of the claims 1 to 10 when said program is executed on a computer.
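To make the retrieval-and-fallback flow of claims 1 to 5 concrete, the following is a minimal Python sketch. It is an illustration only: the names (`PrecomputedResult`, `ValidationInstance`, `handle_request`), the two-entries-per-record cache layout and the set-based invalidation check are assumptions made for readability, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class PrecomputedResult:
    key_values: Tuple[str, ...]   # key values of the data record the result was computed for
    payload: dict                 # the pre-computed search result itself
    computed_at: float            # timestamp t0 of the pre-computation

class ValidationInstance:
    """Backend component that knows which key values were invalidated since pre-computation."""
    def __init__(self, invalidated_key_values: set):
        self._invalidated = invalidated_key_values

    def is_valid(self, result: PrecomputedResult) -> bool:
        # A result is treated as invalid if any of its key values has been invalidated.
        return not any(k in self._invalidated for k in result.key_values)

# Cache: one first-ranked and one second-ranked pre-computed result per data record.
Cache = Dict[Tuple[str, ...], Tuple[Optional[PrecomputedResult], Optional[PrecomputedResult]]]

def handle_request(key_values: Tuple[str, ...], cache: Cache,
                   validator: ValidationInstance) -> Optional[dict]:
    """Return the first pre-computed result if it is still valid, otherwise fall back to
    the second one; return None (an invalidity indication) if both are outdated."""
    first, second = cache.get(key_values, (None, None))
    if first is not None and validator.is_valid(first):
        return first.payload
    if second is not None and validator.is_valid(second):
        cache[key_values] = (None, second)   # drop the stale first-ranked entry
        return second.payload
    cache.pop(key_values, None)              # both entries are outdated
    return None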
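Claims 7 and 8 trigger a new pre-computation when a cached entry has probably aged out. The sketch below assumes the exponential form reconstructed for [Math. 1] above; the names (`aging_value`, `should_trigger_precomputation`) and the default threshold of 0.5 are hypothetical.

```python
import math
import time
from typing import Optional

def aging_value(t0: float, aging_rate: float, t: Optional[float] = None) -> float:
    """Aging value e^(-C*(t - t0)) per [Math. 1]: 1.0 for a freshly pre-computed result,
    decaying towards 0.0 as the result ages."""
    current = time.time() if t is None else t
    return math.exp(-aging_rate * (current - t0))

def should_trigger_precomputation(t0: float, aging_rate: float, threshold: float = 0.5) -> bool:
    """The cached result is likely outdated, and a pre-computation should be triggered,
    once the aging value falls below the given threshold."""
    return aging_value(t0, aging_rate) < threshold
```

For example, with C = 0.01 and an age t − t0 = 120 (in matching time units), the aging value is e^(−1.2) ≈ 0.30, which falls below a threshold of 0.5 and would therefore trigger the pre-computation of claim 7.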
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1909794A FR3100630B1 (fr) | 2019-09-05 | 2019-09-05 | Validation de la mise en mémoire-cache étendue et du temps de requête |
US17/005,863 US11762854B2 (en) | 2019-09-05 | 2020-08-28 | Extended caching and query-time validation |
EP20194518.5A EP3789878A1 (fr) | 2019-09-05 | 2020-09-04 | Mise en antémémoire étendue et validation des temps de recherche |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1909794 | 2019-09-05 | ||
FR1909794A FR3100630B1 (fr) | 2019-09-05 | 2019-09-05 | Validation de la mise en mémoire-cache étendue et du temps de requête |
Publications (2)
Publication Number | Publication Date |
---|---|
FR3100630A1 true FR3100630A1 (fr) | 2021-03-12 |
FR3100630B1 FR3100630B1 (fr) | 2021-10-29 |
Family ID=69024361
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
FR1909794A Active FR3100630B1 (fr) | 2019-09-05 | 2019-09-05 | Validation de la mise en mémoire-cache étendue et du temps de requête |
Country Status (3)
Country | Link |
---|---|
US (1) | US11762854B2 (fr) |
EP (1) | EP3789878A1 (fr) |
FR (1) | FR3100630B1 (fr) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001046865A1 (fr) * | 1999-12-23 | 2001-06-28 | Bull Hn Information Systems Inc. | Procede et appareil permettant d'ameliorer le rendement d'une operation de recherche en antememoire de code genere au moyen de valeurs de cle statiques |
US7430641B2 (en) | 2004-08-09 | 2008-09-30 | Xiv Ltd. | System method and circuit for retrieving into cache data from one or more mass data storage devices |
US8161264B2 (en) | 2008-02-01 | 2012-04-17 | International Business Machines Corporation | Techniques for data prefetching using indirect addressing with offset |
EP2908255A1 (fr) * | 2014-02-13 | 2015-08-19 | Amadeus S.A.S. | Augmenter la validité de résultat de recherche |
EP2911070B1 (fr) * | 2014-02-19 | 2016-10-19 | Amadeus S.A.S. | Validité à long terme de résultats de demande précalculés |
US20170344484A1 (en) * | 2016-05-31 | 2017-11-30 | Salesforce.Com, Inc. | Reducing latency by caching derived data at an edge server |
FR3077146A1 (fr) * | 2018-01-25 | 2019-07-26 | Amadeus S.A.S. | Recalcul de resultats precalcules d'interrogation |
WO2019145480A1 (fr) * | 2018-01-25 | 2019-08-01 | Amadeus S.A.S. | Recalcul de résultats d'interrogation pré-calculés |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101467557B1 (ko) * | 2007-05-02 | 2014-12-10 | 엘지전자 주식회사 | 주행 경로 선택 |
GB2504634B (en) * | 2010-11-22 | 2014-04-09 | Seven Networks Inc | Aligning data transfer to optimize connections established for transmission over a wireless network |
US9262513B2 (en) * | 2011-06-24 | 2016-02-16 | Alibaba Group Holding Limited | Search method and apparatus |
US20160171008A1 (en) * | 2012-08-14 | 2016-06-16 | Amadeus S.A.S. | Updating cached database query results |
KR101375219B1 (ko) * | 2012-09-07 | 2014-03-20 | 록앤올 주식회사 | 교통량 변화를 감지하여 경로를 탐색하는 통신형 내비게이션 시스템 |
US8792638B2 (en) * | 2012-11-28 | 2014-07-29 | Sap Ag | Method to verify that a user has made an external copy of a cryptographic key |
US20160125029A1 (en) * | 2014-10-31 | 2016-05-05 | InsightSoftware.com International | Intelligent caching for enterprise resource planning reporting |
US9647925B2 (en) * | 2014-11-05 | 2017-05-09 | Huawei Technologies Co., Ltd. | System and method for data path validation and verification |
US9489306B1 (en) * | 2015-05-20 | 2016-11-08 | Apigee Corporation | Performing efficient cache invalidation |
US20160379168A1 (en) * | 2015-06-29 | 2016-12-29 | Sap Se | Optimized Capacity Matching for Transport Logistics Accounting for Individual Transport Schedules |
EP3607273A1 (fr) * | 2017-06-02 | 2020-02-12 | Apple Inc. | Fourniture d'un guidage de navigation lumineux |
- 2019
  - 2019-09-05 FR FR1909794A patent/FR3100630B1/fr active Active
- 2020
  - 2020-08-28 US US17/005,863 patent/US11762854B2/en active Active
  - 2020-09-04 EP EP20194518.5A patent/EP3789878A1/fr active Pending
Also Published As
Publication number | Publication date |
---|---|
US20210073228A1 (en) | 2021-03-11 |
US11762854B2 (en) | 2023-09-19 |
FR3100630B1 (fr) | 2021-10-29 |
EP3789878A1 (fr) | 2021-03-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10896285B2 (en) | Predicting user navigation events | |
US6754800B2 (en) | Methods and apparatus for implementing host-based object storage schemes | |
US20110173177A1 (en) | Sightful cache: efficient invalidation for search engine caching | |
EP2727011B1 (fr) | Prédiction d'événements de navigation utilisateur | |
US9245004B1 (en) | Predicted query generation from partial search query input | |
US7467131B1 (en) | Method and system for query data caching and optimization in a search engine system | |
KR100330576B1 (ko) | 컴퓨터네트워크로부터월드와이드웹상의페이지위치을파악하고문서위치를파악하는시스템및방법 | |
US6944634B2 (en) | File caching method and apparatus | |
JPH1091638A (ja) | 検索システム | |
CA2414785A1 (fr) | Systeme tampon a multiples niveaux | |
CN113906407A (zh) | 用于数据库变化流的缓存技术 | |
JP5272428B2 (ja) | アクセス頻度の高い情報を事前にキャッシュする予測型キャッシュ方法、そのシステム及びそのプログラム | |
JP5322019B2 (ja) | 関連する情報を事前にキャッシュする予測型キャッシュ方法、そのシステム及びそのプログラム | |
US20180300248A1 (en) | Method and device for optimization of data caching | |
US11762854B2 (en) | Extended caching and query-time validation | |
CN113490933A (zh) | 分布式数据处理 | |
JPH05143435A (ja) | データベースシステム | |
EP4009188A1 (fr) | Traitement de requêtes de recherche | |
CN107679093B (zh) | 一种数据查询方法及装置 | |
CN116547658B (en) | Processing search requests | |
CN117520377A (zh) | ElasticSearch深度分页查询方法、装置和电子设备 | |
CN116547658A (zh) | 处理搜索请求 | |
JP2004070957A (ja) | 検索システム | |
CN118349488A (zh) | 一种缓存管理方法、装置、电子设备及存储介质 | |
KR100511164B1 (ko) | 웹 검색엔진에서의 실시간 사용자 질의 분석에 기반한and 연산용 색인데이터의 캐슁 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PLFP | Fee payment | Year of fee payment: 2 |
 | PLSC | Publication of the preliminary search report | Effective date: 20210312 |
 | PLFP | Fee payment | Year of fee payment: 3 |
 | PLFP | Fee payment | Year of fee payment: 4 |
 | PLFP | Fee payment | Year of fee payment: 5 |
 | PLFP | Fee payment | Year of fee payment: 6 |