WO2024123293A1 - Artificial intelligence-powered rapid search result display system with adaptation to user habits - Google Patents


Info

Publication number
WO2024123293A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
cache
user
program
trigger
Prior art date
Application number
PCT/TR2023/051459
Other languages
French (fr)
Inventor
Yasemin ŞAHİN DOĞAN
İlter Tolga DOĞAN
Original Assignee
E-Kali̇te Yazilim Donanim Mühendi̇sli̇k Tasarim Ve İnternet Hi̇zmetleri̇ Sanayi̇ Ti̇caret Li̇mi̇ted Şi̇rketi̇
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by E-Kali̇te Yazilim Donanim Mühendi̇sli̇k Tasarim Ve İnternet Hi̇zmetleri̇ Sanayi̇ Ti̇caret Li̇mi̇ted Şi̇rketi̇ filed Critical E-Kali̇te Yazilim Donanim Mühendi̇sli̇k Tasarim Ve İnternet Hi̇zmetleri̇ Sanayi̇ Ti̇caret Li̇mi̇ted Şi̇rketi̇
Publication of WO2024123293A1 publication Critical patent/WO2024123293A1/en

Definitions

  • The present invention aims to eliminate the problems mentioned above and to achieve a technical innovation in the related field.
  • The main objective of the invention is to enable quick access to search results or necessary information through its software.
  • Another objective of the invention is to minimize wear that can occur during use by reducing the usage duration of the hard disk, which is the most physically deteriorated component in big-data systems, and to allow maintenance to be performed over a broader time frame.
  • a further objective of the invention is to reduce the usage duration of the hard disk and processor in big-data systems, thereby preventing heat and energy loss as well as noise pollution.
  • The current invention relates to a data display method involving a cache with search parameters.
  • This method aims to accelerate the display of relevant data on the user interface used by the user, shorten the loading times of data inputs and outputs during operations, and reduce the frequency of use of the servers and hardware that store and/or transmit data. This is characterized by:
  • The invention characterizes a method that allows for user- or user-group-specific classification within the mentioned trigger control, including:
  • Another preferred embodiment of the invention is a method that checks whether the cache is in active or passive state for retrieving desired data from the cache, and if the trigger is not set, checks whether the relevant data has been cached.
  • Another preferred embodiment is a method where, if the cache is set when the data is retrieved from the database, the retrieved data is written to the cache.
  • Another preferred embodiment is a method that checks whether the criteria for caching the relevant data are met if the caching feature is active.
  • Another preferred embodiment is a method that, if the trigger is set, checks the date of the cache entry and retrieves the output from the cache according to its dynamic date range compatibility.
  • Another preferred embodiment is a method where, if the cache is set and the trigger is set, but the cache entry date does not match the dynamic date range, the relevant data is retrieved from the database.
  • The invention is a data display method involving a cache with search parameters, and accordingly, for setting conditions for search data, it involves at least one user or user group:
    a) Using an interface designed to enable the entry of data into the mentioned program,
    b) Introducing cache suitability criteria to the program to allow the user to decide whether the data is worth caching,
    c) Determining and entering into the program the duration and frequency of refreshing the dynamic date range, which includes program usage frequency, to assist in the program's decision to retrieve output from the cache,
    d) Storing, in the physical hardware used as a cache, the data deemed suitable for output from the cache to reach the user,
    e) Using physical hardware as a database for data deemed unsuitable for retrieval from the cache until they become suitable.
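The parameters introduced in steps b) through e) can be collected in a small configuration object. A minimal Python sketch, where all names (`CacheConfig`, `freshness_window`, and so on) and the example values are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class CacheConfig:
    """Hypothetical container for the parameters of steps b) and c)."""
    freshness_window: timedelta   # dynamic date range within which data counts as "new"
    refresh_interval: timedelta   # how often the cache is refreshed in the background
    min_total_requests: int       # suitability criterion: total request count
    min_distinct_users: int       # suitability criterion: distinct-user count

# Example values echoing the figures used later in the description
config = CacheConfig(
    freshness_window=timedelta(days=50),
    refresh_interval=timedelta(hours=24),
    min_total_requests=250,
    min_distinct_users=10,
)
```

Such a structure keeps the user-entered conditions of steps b) and c) in one place so the program can consult them when deciding between cache and database.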
  • A preferred embodiment of the invention includes a method step, performed before step b), of classifying and introducing data to the program to prevent unwanted data from being recorded in the cache.
  • Another preferred embodiment of the invention is the customization of the caching criteria mentioned in method step b) for a specific user or user group.
  • Figure 1 provides the algorithm of the program possessed by the invention. Details not necessary for understanding the present invention may have been omitted. Besides, elements that are at least substantially identical, or at least have substantially identical functions, are shown.
  • The invention relates to a program and associated method that enable quick access to frequently used data for users.
  • Figure 1 presents the algorithm of the program used.
  • Data output from the cache enables the fast access protected by the invention, whereas data output from the database is prior art. Accordingly, when a user wants to access data, it is necessary to know whether the desired data is in the cache. If the trigger is set, the condition for data coming from the cache can be time-dependent, and the data is considered invalid if it was cached before the time set by the trigger. For the trigger to function, certain information must be defined in the program: the criteria for retrieving information from the cache, the dynamic date range, the concept of “newness,” and, specifically, data that should be directly retrieved from or not retrieved from the cache. The concept of “new” or “fresh” is introduced to the program to determine whether the desired data in the cache is new or not.
  • If the date range is set to a maximum of 30 days and introduced to the program, the program will consider the data new and quickly retrieve it from the cache if the data has been cached within the last 30 days; otherwise, it will deem the data old and proceed to the step of querying whether caching of the data is active. If the data is not new and not set for caching, the output regarding the data will be retrieved from the database until caching is enabled for the data.
  • The program's required date range for classifying data as new is dynamic and programmable. For example, if the freshness period is set to 5 days, the program can count back 5 days, 120 hours, or 432,000 seconds from the current date at 12:00 PM to determine the period that must pass for the data to be considered old. Dynamic date setting prevents old data from being unnecessarily stored in the cache, making room for new data entries. Data older than the set dynamic date range is cleared during cache refreshing.
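The freshness check described above can be sketched as a small helper; note that 5 days, 120 hours, and 432,000 seconds denote the same interval. Function names and the fixed timestamps below are illustrative, not from the patent:

```python
from datetime import datetime, timedelta

# 5 days == 120 hours == 432000 seconds: one dynamic date range, three notations
assert timedelta(days=5) == timedelta(hours=120) == timedelta(seconds=432000)

def is_fresh(cached_at: datetime, now: datetime,
             window: timedelta = timedelta(days=5)) -> bool:
    """Data cached within the dynamic date range counts as 'new'."""
    return (now - cached_at) <= window

now = datetime(2024, 1, 10, 12, 0)
print(is_fresh(datetime(2024, 1, 7, 12, 0), now))  # cached 3 days ago
print(is_fresh(datetime(2024, 1, 1, 12, 0), now))  # cached 9 days ago
```

Because `window` is an ordinary parameter, the "freshness" boundary can be reprogrammed at any time, which is what makes the date range dynamic.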
  • Cache refreshing occurs under the control of the triggering system, which is a key innovation of the invention.
  • During cache refreshing, the suitability of the data in the cache against the criteria is checked, and data deemed not worth storing in the cache is removed.
  • The frequency of data refreshing is determined by advancing the oldest date in the dynamic date range with each refresh, giving the date range its dynamic property.
  • The frequency of data refreshing is introduced to the program for automatic data refreshing in the cache.
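The background refresh step can be sketched as follows; the cache entry layout (a `cached_at` field per entry) is an assumption made for illustration:

```python
from datetime import datetime, timedelta

def refresh_cache(cache: dict, now: datetime, window: timedelta) -> None:
    """Drop entries older than the dynamic date range. Each refresh advances
    the cutoff (now - window), which is what gives the range its moving
    lower bound."""
    cutoff = now - window
    stale = [key for key, entry in cache.items() if entry["cached_at"] < cutoff]
    for key in stale:
        del cache[key]
```

Running this on a schedule (the refresh frequency introduced to the program) keeps the cache holding only data inside the current dynamic date range.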
  • The criteria controlled during data refreshing in a preferred embodiment are:
  • A minimum number of different users can also be defined as part of the criteria for caching data, allowing data to be considered for caching based on that minimum even if it does not reach the total number of uses across all users. This prevents the cache from unnecessarily storing data repeatedly searched for by the same users. For instance, if the maximum period considered new is 50 days and the caching criteria are as follows:
  • For the data, it is not sufficient to have been used at least 250 times by users in total; at least 10 distinct individuals must have used it.
  • Both the maximum period within which the data must have been accessed and the minimum number of users are essential. That is, in the example above, criteria 1 and 2 are linked with “and,” not “or,” and criterion 2 is essential. Therefore, for the data to be cached, it must have been requested within a maximum of 50 days and by at least 10 users. Thus, two users requesting the data at least 125 times each within 50 days is not sufficient.
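The conjunctive criteria above can be sketched as one predicate. The function name and the `(timestamp, user_id)` request layout are illustrative assumptions, not the patent's data model:

```python
from datetime import datetime, timedelta

def meets_caching_criteria(requests, now,
                           max_age_days=50, min_total=250, min_users=10):
    """Both criteria are conjunctive ('and', not 'or'): within the last
    `max_age_days`, the data must have been requested at least `min_total`
    times in total AND by at least `min_users` distinct users.
    `requests` is a list of (timestamp, user_id) pairs."""
    cutoff = now - timedelta(days=max_age_days)
    recent = [(t, u) for t, u in requests if t >= cutoff]
    return len(recent) >= min_total and len({u for _, u in recent}) >= min_users

now = datetime(2024, 6, 1)
t = now - timedelta(days=1)
two_users = [(t, 1)] * 125 + [(t, 2)] * 125   # 250 requests, but only 2 users
ten_users = [(t, u) for u in range(10)] * 25  # 250 requests across 10 users
print(meets_caching_criteria(two_users, now))  # fails the distinct-user criterion
print(meets_caching_criteria(ten_users, now))  # meets both criteria
```

The `two_users` case reproduces the counter-example in the text: 250 recent requests are not enough when they come from only two users.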
  • A set cache means that the cache is usable, i.e., both writing data to and retrieving data from the cache are open. Therefore, according to the algorithm given in Figure 1, if the cache is not set when the data is requested, the output will always be retrieved from the database. Even if the cache is set, if the data is not fresh according to the trigger, the data will be retrieved from the database, not the cache. Data that has never been written to the cache can never be retrieved from the cache. Whether the data is cached is queried by the program as whether the data is in the cache or not.
  • If the data is in the cache, it is retrieved from the cache; if not, it is retrieved from the database and written to the cache, since the cache was previously set. Thus, the next time the same data is requested, if the previously mentioned conditions are met, it will be retrieved from the cache, not the database. For example, if data is requested for the first time and the cache is not set, it will be retrieved from the database. If the same data is requested again and the cache is still not open, it will again be retrieved from the database, and the same holds however many times the data is requested. Once the data is retrieved from the database, the program continually checks whether the cache is open, i.e., whether the cache is set. When the cache is opened, the cache will be set, and the data will be written to it. Thus, when data is searched for the first time while the cache is closed, it is retrieved from the database and written to the cache once the cache is opened.
  • When the cache is open, i.e., the cache is set, and the data is searched for the first time, the program will query whether the trigger is set. Since the trigger is the part where the criteria for retrieving data from the cache are examined, and the data is being searched for the first time, the answer will be no. The next question will be whether the data is cached, i.e., whether it has previously been written to the cache. Since it is the first search, the answer will be no, and the relevant data will be retrieved from the database. Then, the program will query again whether the cache is set, and, as mentioned at the beginning of the paragraph, since the answer is yes, the data will be written to the cache.
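The decision flow walked through above (cache set? → trigger set? → cached? → fresh?) can be sketched as one lookup function. All names, the entry layout, and the freshness value are illustrative assumptions, not the patent's implementation:

```python
from datetime import datetime, timedelta

FRESHNESS = timedelta(days=30)  # dynamic date range (illustrative value)

def fetch(key, cache, database, cache_enabled, trigger_set, now):
    if not cache_enabled:
        return database[key]               # cache not set: always the database
    entry = cache.get(key)
    if trigger_set:
        # trigger set: serve from cache only if the entry exists and is "new"
        if entry is not None and (now - entry["cached_at"]) <= FRESHNESS:
            return entry["value"]
    elif entry is not None:
        return entry["value"]              # trigger not set: any cached copy is served
    value = database[key]                  # miss or stale: fall back to the database...
    cache[key] = {"value": value, "cached_at": now}  # ...and write through
    return value
```

On the first search with the cache open, the data comes from the database and is written to the cache; a later search with the trigger set is then served from the cache while the entry remains inside the dynamic date range, matching the walkthrough above.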
  • The trigger controls when the cache is invalid, i.e., it ensures that data cached before a certain date is not retrieved from the cache.
  • The novelty of the invention depends on the trigger being set, and both the caching criteria and the dynamic date range are controlled by the trigger.
  • The trigger activates the rapid retrieval of necessary data from the cache. If the trigger is set, the program checks the data's caching criteria and decides on its compliance with the date ranges. If the answer to whether the trigger is set is yes, it is decided whether the triggered data fits the dynamic date range previously introduced to the program. If the triggered data is fresh (new) according to time, it is retrieved from the cache, and the data is quickly presented.
  • The invention increases the efficiency of hardware usage in the real world and allows hardware that would otherwise perform cache operations slowly to work much faster than in conventional systems. This enables cost reduction through the use of lower-cost hardware.
  • The invention increases the time between two maintenance periods, allowing for a longer hardware life.
  • Another physical impact of the invention is reducing heat emissions from servers without any extra cooling system, thereby reducing heat pollution to the environment and providing an improvement against hardware problems caused by heat. Additionally, reducing the frequency of use of hardware or not needing to operate at maximum performance can also prevent noise pollution. This is also important for the health of people working in proximity to hardware affected by heat and noise pollution.
  • The invention is a method containing a program that enables quick access to data, and in this context, it involves method steps of:
  • RAM: random access memory
  • Database: for storing data upon their entry into the program

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Artificial Intelligence-Powered Rapid Search Result Display System with Adaptation to User Habits The invention relates to a data display method that includes a program designed to accelerate the display of relevant data on the user interface used by the user, shorten the loading times of data inputs and outputs during operations, and reduce the frequency of use of servers and hardware that store and/or transmit data.

Description

Artificial Intelligence-Powered Rapid Search Result Display System with Adaptation to User Habits
TECHNICAL FIELD
The invention relates to a program equipped with software that allows for the utilization of low-specification hardware, enabling search results to be retrieved faster from memory without accessing the hard disk when the same results are searched for the second time or subsequently. It also pertains to a method involving the use of this program, which facilitates the background refreshing of these results.
PRIOR ART
The current explosion in information access is well-known. The increased amount of data provided by systems has significantly augmented the data-receiving burden of the average user to the extent that it challenges the limits of existing equipment. As a result, issues related to usable bandwidth, server load, and overall network traffic may arise. Individuals accessing information using systems are familiar with these limitations, even while using connections with relatively high bandwidth. A partial solution to these problems has been the use of a cache provided on the user's workstation side. For example, when a web page is first downloaded to a workstation, it is typically stored in this cache using the hard disk of the workstation. On subsequent access to this page by the workstation, the workstation and/or remote server can often determine that the page has not been modified or that only parts of the data in local storage have been altered, instead of adding load to the network lines. Most web browser programs include the local caching of accessed web pages. While these caches are widely used and accepted, they have limited applications. Generally, each of these caches uses a local data storage drive accessible from a particular workstation. Each workstation can be equipped with such a cache, but the cache of one workstation is generally not accessible from another. Accordingly, even if a web page has been previously accessed by other workstations in the local area network, a workstation that has not previously accessed this page cannot retrieve it from the caches of other workstations. Network bandwidth is effectively wasted in operational environments where it is likely that each of the workstations will continuously access the same web page or pages repeatedly. Additionally, more processor and hard disk load on server resources are also re-expended with each client request.
In a corporate or other group setting, it is common for many users sharing similar interests to frequently access the same material from the web, albeit through any of numerous different workstations. For instance, investment firms may wish to track the continually changing stock market using a group of employees and/or consultants, each provided with a workstation. These workstations are typically connected to one or more local servers to provide internet access. Generally, this creates a heavy data transfer load between the local server(s) and the remote data server. The same web page is transferred repeatedly, but each time to a different workstation. Furthermore, while individual client workstations may have local caches, a connection to the remote server is still necessary to determine whether a page has changed since a particular workstation's last access to it. Until now, the main solution to this efficiency issue has been to add more bandwidth and equipment, often at a significant cost compared to the performance gain achieved.
The mentioned systems are most critically needed in various sectors, particularly in healthcare, machinery, communications, telecommunications, and information systems. The necessity to manage large volumes of data and to deliver services to customers rapidly can be of vital importance in these fields. In such areas, the use of big data and the requirement for high-specification hardware to quickly process and visually present data are evident. Organizing data based on the frequency of use and ensuring the desired data reaches the user involves significant investments of time and labor.
The use of the high-specification hardware necessary to overcome the mentioned disadvantages can lead to waste heat and energy loss together with a need for cooling, to noise pollution, and to the generation of hazardous waste from unusable materials produced during maintenance and repair. These factors can have adverse effects on the environment and the ecosystem.
In the preliminary patent search, the following documents were encountered.
In the Chinese document with publication number CN114371892A, a type of memory-based EMS system rule engine is described. This engine specifically uses a cache-based rule engine, Java language map structure for storage, and a map structure that operates based on keys and values. Access to data is faster than from disk, with no network transmission and delay. Once a decision needs to be made, actions can be triggered and executed in a timely manner, effectively enhancing the overall system working efficiency and ensuring overall operational safety (such as timely alarms, timely notifications). This can help businesses reduce costs and increase efficiency. However, the mentioned rules are created manually, and there is no algorithm or method to adapt to continuous data input and output. The requirement for substantial memory usage and the accessibility of information based on specific classification rules does not solve all the problems mentioned previously.
The Chinese document with publication number CN114328779A proposes a cloud-based geographical information cloud disk aimed at efficiently acquiring and browsing data, including the storage of geographical information data in the cloud. This proposal targets the characteristics of large-scale diversification, rapid circulation, strong timing, and the high value of spatial big data. Even if geographical information undergoes certain changes, these changes are not conducive to high-frequency data entry. Moreover, the ability to access data from the cloud system can lead to security issues and necessitates a specific server requirement for data storage in the cloud. Continuous use of the system inevitably leads to significant heating of these servers and a constant need for maintenance. This situation can result in negative consequences both financially and in terms of environmental sensitivity.
The U.S. document with publication number US20210209020A1 describes a method for ensuring the targeted caching of data. The method includes receiving a user input containing an account identifier; automatically identifying a subset of data previously accessed by the user using an activity log and the account identifier; generating a copy of the data subset; associating the copy with the data subset by linking the copy to the data subset; and storing the copy in a temporary file storage. These technical benefits are detailed extensively. The method also involves receiving a request for the data subset from the user and displaying the copy in a graphical user interface. While this system can provide the necessary reliability in terms of speed, it requires high-specification hardware for operation and is based on a cache system that relies on pre-targeted data and user identification, rather than adapting to user habits.
The Chinese document with publication number CN106055690B describes a method for rapid retrieval and acquisition of data characteristics based on attribute matching. The method comprises step S1, establishing an attribute matching model; and step S2, rapid access based on the attribute matching model. It utilizes fast processing, rapid memory selection, and a multi-tiered caching technology. Using this method, a matching result is quickly obtained, and the reusability of the matching result is enhanced. A memory database is introduced to perform the caching of retrieval data, and memory data is used to perform the calculation of an intermediate result, thus relieving the hard-disk bottleneck of traditional retrieval methods and improving data output speed. To create a cache system for retrieval, data must be stored in memory, and as data input increases, so does the required memory. Consequently, the physical space needed for rapid access to relevant data increases, and the document does not describe the system's adaptability to frequent use of data. That is, the speed of access to frequently versus infrequently used data depends on the choice of memory where the data is stored, necessitating prior classification and organization of data, which leads to increased costs and labor.
The USPTO document with publication number US9465631B2 describes an automatic caching system that enables faster computation of dependent results and automatically identifies user-relevant points to be incrementally cached, which are costly to obtain. The system intelligently chooses between locally caching data and sending computation to a remote location co-located with the data, thus accelerating the computation of results. The automatic caching system uses stable keys uniquely referring to programmatic identifiers; adds annotations to programs before execution with additional code using keys to associate and cache interim programmatic results; and can maintain the cache in a separate process or even a separate machine to allow the cached results to outlive program execution and be used by subsequent executions. It uses cost estimations to decide whether using cached values or remote execution will result in faster computation of results. The operation of the system requires both a powerful server and high-specification hardware due to the large amount of cached data. The acceleration of processes is prioritized based on the benefits provided by predetermined input types. This classification and ordering must be transmitted to artificial intelligence either by simultaneously using different programs, manually creating these classification inputs, manually checking them, or employing another AI-equipped program to understand the benefit of controlling the program.
The EPO document with publication number EP3062482A1 describes a method and system for high-speed wireless data reception from a caching device to a portable device. The described methods and systems concern a general method and/or system for end-to-end transfer from a content provider to a portable user device, which can be integrated with combinable technology. In this study, high-capacity data storage areas can be used; however, the limits of the caching device fall short of the desired limits, and there is no program to organize the data or to determine which data needs to be delivered faster.
The PCT document with publication number WO2015101827A1 describes a system, method, and apparatus for a main memory (MM) and a configurable auxiliary processor (CP) chip to process a subset of network functions. The document details how data can be processed quickly and how crucial data can be automatically classified and analyzed. However, it does not explain how the system responds to high-speed data inputs or outputs, nor does it describe how these outputs are visually presented to the user intending to receive the data.
The U.S. document with publication number US7735076B2 describes methods for efficiently loading extensions or "plug-ins" into a host software program. In a preferred arrangement, when the host program is initially launched, extensions registered with the host software program can be loaded. Desired data can be loaded into a database in the form of extensions, and this can be processed using a powerful processor with less RAM (Random Access Memory) than usual. The invention facilitates the transfer of large amounts of data to programs, but it requires hardware of a higher technological standard than conventional equipment.
The U.S. document with publication number US20060179123A1 describes a method for faster access to frequently updated data using a web group to automatically download from a remote server. The web group stores data in a cache that can be accessed from any of numerous browser-equipped workstations. These workstations are connected via a communication network to a web farm, which includes one or more local servers and associated data storage devices. The cache used must be able to store large amounts of data, and the servers must continuously work to organize new data. This can lead to physical heating and the need for constant maintenance of the servers, as the data is redundantly stored on them, and any failure could cause a delay in the system switching to the backup and slower access to critical data. Additionally, a physically large space is required to store the data storage hardware. Increasing the number of local servers could reduce the need for physical storage space but also somewhat hinders the distribution of data and the rapid accessibility of important information. An increase in the number of servers can slow down data access because an online data detection system is necessary.
As a result, all the problems mentioned above have made it imperative to innovate in the relevant field.
OBJECTIVES OF THE INVENTION
The present invention aims to eliminate the problems mentioned above and to achieve a technical innovation in the related field.
The main objective of the invention is to enable quick access to search results or necessary information through its software.
Another objective of the invention is to minimize wear that can occur during use by reducing the usage duration of the hard disk, which is the most physically deteriorated component in big-data systems, and to allow maintenance to be performed over a broader time frame.
A further objective of the invention is to reduce the usage duration of the hard disk and processor in big-data systems, thereby preventing heat and energy loss as well as noise pollution.
BRIEF DESCRIPTION OF THE INVENTION
To achieve all the objectives mentioned above and as will be detailed below, the current invention relates to a data display method involving a cache with search parameters. This method aims to accelerate the display of relevant data on the user interface used by the user, shorten the loading times of data inputs and outputs during operations, and reduce the frequency of use of servers and hardware that store and/or transmit data. This is characterized by:
• Detecting previously searched data in the cache,
• Checking the conditions for retrieving the detected data from the cache,
• Controlling the timeliness of the trigger and, based on the detection, deciding whether to call the data from the cache or the database.
In a preferred embodiment, the invention characterizes a method that allows for user or user group-specific classification within the mentioned trigger control, including:
• Categorizing data for classification,
• Checking whether the search parameters meet the conditions predefined by the user,
• Comparing the searched data with the class in the cache.
Another preferred embodiment of the invention is a method that checks whether the cache is in active or passive state for retrieving desired data from the cache, and if the trigger is not set, checks whether the relevant data has been cached.
Another preferred embodiment is a method where, if the cache is set when the data is retrieved from the database, the retrieved data is written to the cache.
In another preferred embodiment, in the condition where scheduled intervals are defined, it prioritizes:
• The oldest refreshed data,
• The most requested data,
• The data with the most recent request date for sorting, and refreshes the trigger by grouping these data requests.
Another preferred embodiment is a method that checks whether the criteria for caching the relevant data are met if the caching feature is active.
Another preferred embodiment is a method that, if the trigger is set, checks the date of the cache entry and retrieves the output from the cache according to its dynamic date range compatibility.
Another preferred embodiment is a method where, if the cache is set and the trigger is set, but the cache entry date does not match the dynamic date range, the relevant data is retrieved from the database.
Apart from the above, the invention is a data display method involving a cache with search parameters; accordingly, for setting conditions for search data, it involves at least one user or user group:
a) Using an interface designed to enable the entry of data into the mentioned program,
b) Introducing cache suitability criteria to the program to allow the user to decide whether the data is worth caching,
c) Determining and entering into the program the duration and frequency of refreshing the dynamic date range, which includes program usage frequency, to assist the program's decision to retrieve output from the cache,
d) Storing the data deemed suitable for output from the cache in the physical hardware used as a cache so that it reaches the user,
e) Using physical hardware as a database for data deemed unsuitable for retrieval from the cache until it becomes suitable.
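The configuration entries described in steps a) through e) above can be sketched as a simple settings object. This is a minimal illustration only; the field names and values are assumptions for the example, not definitions from the invention:

```python
from dataclasses import dataclass

@dataclass
class CacheSettings:
    """Illustrative container for the user-defined caching conditions."""
    min_total_searches: int      # step b): cache suitability criterion
    min_distinct_users: int      # step b): cache suitability criterion
    freshness_days: int          # step c): dynamic date range duration
    refresh_interval_hours: int  # step c): refresh frequency

# Example: data becomes cache-worthy after 250 searches by at least 10
# distinct users, stays "new" for 50 days, and refreshes every 24 hours.
settings = CacheSettings(min_total_searches=250,
                         min_distinct_users=10,
                         freshness_days=50,
                         refresh_interval_hours=24)
```

Such a settings object would be entered once through the interface of step a) and consulted by the trigger control thereafter.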
A preferred embodiment of the invention includes a method step of classifying and introducing data to the program to prevent unwanted data from being recorded in the cache before step b).
Another preferred embodiment of the invention is the customization of the caching criteria mentioned in method step b) for a specific user or user group.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 provides the algorithm of the program of the invention. Details not necessary for understanding the present invention may have been omitted. Furthermore, elements that are at least substantially identical, or that have at least substantially identical functions, are shown.
DETAILED DESCRIPTION OF THE INVENTION
In this detailed description, the subject of the invention, "Artificial Intelligence-Powered Rapid Search Result Display System with Adaptation to User Habits," is explained with examples that have no limiting effect and are provided merely for better understanding. The invention relates to a program and associated method that enable quick access to frequently used data.
Figure 1 presents the algorithm of the program used. Data outputted from the cache enables fast access protected by the invention, whereas data outputted from the database is prior art. Accordingly, when a user wants to access data, it is necessary to know whether the desired data is in the cache. If the trigger is set, the condition for data coming from the cache can be time-dependent and considered invalid if cached before the time set by the trigger. For the trigger to function, certain information must be defined in the program. These include criteria for retrieving information from the cache, dynamic date range, the concept of "newness," and specifically data that should be directly retrieved from or not retrieved from the cache. The concept of "new" or "fresh" is introduced to the program, determining whether the desired data in the cache is new or not. If the date range is set to a maximum of 30 days and introduced to the program, it will consider the data new and quickly retrieve it from the cache if the data has been cached within the last 30 days; otherwise, it will deem the data old and proceed to the step of querying whether the caching of the data is active. If the data is not new and not set for caching, the output regarding the data will be retrieved from the database until caching is enabled for the data.
In a preferred embodiment, the program's required date range for classifying the data as new is dynamic and programmable. For example, if the freshness period is set to 5 days, the program can count back 5 days from the current date at 12:00 PM, 120 hours, or 432000 seconds to control the period that needs to pass for the data to be considered old. Dynamic date setting prevents old data from being unnecessarily stored in the cache, making room for new data entries. Data older than the set dynamic date range is cleared during cache refreshing.
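Clearing entries that fall outside the dynamic date range, as described above, might look like the following sketch. The data layout (a dictionary of entries with a `cached_at` timestamp) is an assumption made for illustration:

```python
from datetime import datetime, timedelta

def purge_stale(cache: dict, freshness_days: int, now: datetime) -> dict:
    """Keep only entries cached within the dynamic date range."""
    cutoff = now - timedelta(days=freshness_days)
    return {key: entry for key, entry in cache.items()
            if entry["cached_at"] >= cutoff}

now = datetime(2024, 6, 1)
cache = {
    "q1": {"cached_at": datetime(2024, 5, 30), "value": "fresh result"},
    "q2": {"cached_at": datetime(2024, 3, 1), "value": "stale result"},
}
# With a 5-day freshness period, "q2" is cleared, making room for new entries.
cache = purge_stale(cache, freshness_days=5, now=now)
```

Running such a purge at each refresh keeps the cache small, which is what allows the method to target low-specification hardware.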
Cache refreshing occurs under the control of the triggering system, which is a key innovation of the invention. During cache refreshing, the suitability of data in the cache to criteria is checked, and data deemed not worth storing in the cache is removed. The frequency of data refreshing is determined by advancing the oldest date in the dynamic date range for each refresh, giving the date range a dynamic property. The frequency of data refreshing is introduced to the program for automatic data refreshing in the cache. The criteria controlled in data refreshing in a preferred embodiment are:
• The oldest refreshed date,
• The most requested data,
• The latest date of last request.
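The three refresh-priority criteria listed above can be combined into a single sort key, as in the following sketch (field names are illustrative assumptions):

```python
from datetime import datetime

entries = [
    {"key": "a", "refreshed": datetime(2024, 5, 1), "requests": 40,
     "last_request": datetime(2024, 5, 30)},
    {"key": "b", "refreshed": datetime(2024, 4, 1), "requests": 90,
     "last_request": datetime(2024, 5, 31)},
    {"key": "c", "refreshed": datetime(2024, 4, 1), "requests": 90,
     "last_request": datetime(2024, 5, 20)},
]

# Oldest refresh date first; ties broken by most-requested,
# then by the most recent last-request date.
entries.sort(key=lambda e: (e["refreshed"], -e["requests"],
                            -e["last_request"].timestamp()))
order = [e["key"] for e in entries]
```

Here "b" and "c" share the oldest refresh date and request count, so "b" is refreshed first because it was requested more recently, and "a" last.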
In a preferred embodiment, the minimum number of different users as part of the criteria for caching data can also be defined, allowing data to be considered for caching based on the minimum number of different users, even if it does not reach the total number of general user usages. This prevents the cache from unnecessarily storing data repeatedly searched by the same users. For instance, if the maximum period considered new is 50 days and the caching criteria are as follows:
• Minimum number of times the data is searched by total users: 250

According to the above, for the data to be cached and the trigger set, two individuals would need to use the same data 125 times each. In an organization with a total of 20 individuals, if only 2 individuals search the data enough times, the trigger system makes the data quickly accessible even for the 18 individuals who do not use it. This means the program unnecessarily keeps the data in the cache for the 18 individuals who do not wish to use it.
Considering the criteria as follows:
1. Minimum number of total user searches for the data: 250
2. Minimum number of users wanting to access the data: 10
In this case, it is not sufficient for the data to be used at least 250 times by total users; at least 10 individuals must have used the data.
In an alternative embodiment, both the maximum determined time within which the data must have been accessed and the minimum number of users are essential. That is, in the example above, criteria 1 and 2 are linked with "and" rather than "or", and criterion 2 is essential. Therefore, for the data to be cached, it must have been requested within a maximum of 50 days and by at least 10 users. Thus, two users requesting the data at least 125 times each within 50 days is not sufficient.
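The combined criteria of the example above — requests within at most 50 days, at least 250 total searches, and at least 10 distinct users, joined with "and" — can be sketched as follows (the function and its data layout are illustrative assumptions):

```python
from datetime import datetime, timedelta

def meets_cache_criteria(requests, now,
                         max_age_days=50, min_total=250, min_users=10):
    """requests: list of (user_id, requested_at) tuples for one data item."""
    cutoff = now - timedelta(days=max_age_days)
    recent = [(user, ts) for user, ts in requests if ts >= cutoff]
    distinct_users = {user for user, _ in recent}
    # Both conditions are required: "and", not "or".
    return len(recent) >= min_total and len(distinct_users) >= min_users

now = datetime(2024, 6, 1)
ts = datetime(2024, 5, 15)
# Two users with 125 requests each: 250 total but only 2 users -> rejected.
two_users = [(u, ts) for u in ("u1", "u2") for _ in range(125)]
# Ten users with 25 requests each: 250 total and 10 users -> accepted.
ten_users = [(f"u{i}", ts) for i in range(10) for _ in range(25)]
```

The distinct-user condition is what prevents the cache from filling up with data that only a couple of users search repeatedly.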
If the cache is not set in the program for the relevant data, or if the data does not meet the caching criteria, the data is retrieved from the database, and the decision returns to whether the cache is set. A set cache means that the cache is usable, i.e., writing data to and retrieving data from the cache is open. Therefore, according to the algorithm given in Figure 1, if the cache is not set when the data is requested, the output is always retrieved from the database. Even if the cache is set, if the data is not fresh according to the trigger, the data is retrieved from the database, not the cache. Data that has never been written to the cache can never be retrieved from the cache. Whether the data is cached is queried by the program as whether the data is in the cache or not. If it is in the cache, it is retrieved from the cache; if not, it is retrieved from the database and written to the cache, since the cache was previously set. Thus, the next time the same data is requested, if the previously mentioned conditions are met, the data is retrieved from the cache, not the database. For example, if data is requested for the first time and the cache is not set, it is retrieved from the database. If the same data is requested again and the cache is still not open, it is again retrieved from the database, and the same holds however many times the data is requested. Once the data is retrieved from the database, the program continually checks whether the cache is open, i.e., whether the cache is set. When the cache is opened, the cache is set and the data is written to the cache. Thus, when data is searched for the first time while the cache is closed, it is retrieved from the database and written to the cache once the cache is opened.
When the cache is open, i.e., the cache is set, and the data is searched for the first time, the program will query whether the trigger is set. Since the trigger is the part where the criteria for retrieving data from the cache are examined, and the data is searched for the first time, the answer will be no. The next question will be whether the data is cached, i.e., whether it has been previously written to the cache. Since it is the first search, the answer will be no, and the relevant data will be retrieved from the database. Then, the program will query again whether the cache is set, and as mentioned at the beginning of the paragraph, since the answer is yes, the data will be written to the cache.
The trigger controls when the cache is invalid, i.e., it ensures that data before a certain date is not retrieved from the cache. The novelty of the invention depends on the trigger being set, and both the caching criteria and the dynamic date range are controlled by the trigger. The trigger activates the rapid retrieval of necessary data from the cache. If the trigger is set, the program checks the data's caching criteria and decides on its compliance with the date ranges. If the answer to whether the trigger is set is yes, it is decided whether the triggered data fits the dynamic date range introduced to the program previously. If the triggered data is fresh (new) according to time, it is retrieved from the cache, and the data is quickly presented. If the trigger is not set, i.e., in passive mode, having the cache set and the data previously cached is sufficient for retrieving the output related to the data from the cache, but if the trigger is set, access to the cache can be provided by checking the cache retrieval conditions and the dynamic date range's compliance. Every scenario where the trigger is not set is technically acceptable. Thanks to the program described above, the invention enables an increase in the efficiency of hardware usage in the real world and allows hardware, which would otherwise perform cache operations slowly, to work much faster compared to conventional systems. This enables cost reduction by using lower-cost hardware.
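The overall decision flow of Figure 1, as walked through in the paragraphs above, can be sketched as a single function. This is a simplified illustration under assumed data structures, not code defined by the invention:

```python
from datetime import datetime, timedelta

def fetch(key, cache, database, *, cache_set, trigger_set,
          freshness_days=30, now=None):
    """Decide whether to serve `key` from the cache or the database."""
    now = now or datetime.now()
    if cache_set:
        entry = cache.get(key)
        if trigger_set:
            # Trigger set: the entry must also fall in the dynamic date range.
            if entry and now - entry["cached_at"] <= timedelta(days=freshness_days):
                return entry["value"], "cache"
        elif entry is not None:
            # Trigger passive: being previously cached is sufficient.
            return entry["value"], "cache"
    value = database[key]
    if cache_set:
        # Write-through: the next request can be served from the cache.
        cache[key] = {"value": value, "cached_at": now}
    return value, "database"

db = {"q": "result"}
cache = {}
now = datetime(2024, 6, 1)
# First search: not yet cached -> database, then written to the cache.
v1, src1 = fetch("q", cache, db, cache_set=True, trigger_set=False, now=now)
# Second search: served from the cache.
v2, src2 = fetch("q", cache, db, cache_set=True, trigger_set=False, now=now)
```

With `trigger_set=True`, a stale entry would fail the date-range check and fall through to the database branch, refreshing the cache entry in the process.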
Especially by reducing the frequency of use of servers and hardware that store and/or transmit data, the invention increases the time between two maintenance periods, allowing for longer hardware life.
Similarly, another physical impact of the invention is reducing heat emissions from servers without any extra cooling system, thereby reducing heat pollution to the environment and providing an improvement against hardware problems caused by heat. Additionally, reducing the frequency of use of hardware or not needing to operate at maximum performance can also prevent noise pollution. This is also important for the health of people working in proximity to hardware affected by heat and noise pollution.
Reducing the frequency of use of servers and hardware that store and transmit data also enables a reduction in energy usage and related costs.
The invention is a method containing a program that enables quick access to data, and in this context, it involves method steps of:
- Using memory (RAM) and a database for storing data upon their entry into the program,
- Creating an interface that enables the searching of data previously in the program for usage purposes,
- Creating an interface that displays and enables the use of the relevant data after the search.
The method for quick access to data is implemented with the program whose algorithm is found in Figure 1 and described in the detailed explanation. The scope of protection for the invention is specified in the claims attached, and it should not be limited solely to the examples provided in this detailed explanation. Indeed, it is evident that a person skilled in the art could develop similar structures in light of the above descriptions, without departing from the main theme of the invention.

Claims

1. The invention is a data display method involving a cache with search parameters to accelerate the display of relevant data on the user interface used by the user, shorten the loading times of data inputs and outputs during operations, and reduce the frequency of use of servers and hardware that store and/or transmit data. This method is characterized by: a) Detecting previously searched data within the cache, b) Checking the conditions for retrieving the detected data from the cache, c) Controlling the timeliness of the trigger and, based on the detection, deciding whether to call the data from the cache or the database.
2. A method in accordance with claim 1, characterized by performing user or user group-specific classification in the mentioned trigger control: a) Categorizing data for classification, b) Checking if the search parameters meet the conditions predefined by the user, c) Comparing the searched data with the class in the cache.
3. A method in accordance with claim 1, characterized by checking if the cache is set in active or passive state for retrieving the desired data from the cache; and if the trigger is not set: a) Checking if the relevant data has been cached.
4. A method in accordance with claim 1, characterized by writing the retrieved data to the cache if the cache is set when the data is retrieved from the database.
5. A method in accordance with claim 1, characterized in the condition where scheduled intervals are defined for prioritizing and sorting: a) The oldest refreshed data, b) The most requested data, c) The data with the most recent request date, and refreshing the trigger by grouping these data requests.
6. A method in accordance with claim 2, characterized by checking whether the criteria for caching the relevant data are met if the caching feature is active.
7. A method in accordance with claim 1, characterized by checking the date of the cache entry and retrieving the output from the cache according to its dynamic date range compatibility if the trigger is set.
8. A method in accordance with claim 5, characterized by retrieving the relevant data from the database if the cache entry date does not match the dynamic date range when the cache is set and the trigger is set.
9. A data display method involving a cache with search parameters, characterized by steps for setting conditions for search data by at least one user or user group: a) Using an interface designed to enable the entry of data into the mentioned program, b) Introducing cache suitability criteria to the program for the user to decide whether the data is worth caching, c) Determining and entering into the program the duration and frequency of refreshing the dynamic date range that includes program usage frequency to assist in the program's decision to retrieve output from the cache, d) Storing in physical hardware used as a cache the data deemed suitable for output from the cache to reach the user, e) Using physical hardware as a database for data deemed unsuitable for retrieval from the cache until they become suitable.
10. A data display method in accordance with claim 9, characterized by a method step that includes classifying and introducing data to the program to prevent the recording of unwanted data in the cache before step b.
PCT/TR2023/051459 2022-12-06 2023-12-05 Artificial intelligence-powered rapid search result display system with adaptation to user habits WO2024123293A1 (en)

Applications Claiming Priority (2)

Application Number: TR2022/018685 — Priority Date: 2022-12-06

Publications (1)

Publication Number Publication Date
WO2024123293A1 (en) — 2024-06-13

Family

ID=91379963

