US20150242326A1 - System and Method for Caching Time Series Data - Google Patents

System and Method for Caching Time Series Data

Info

Publication number
US20150242326A1
Authority
US
United States
Prior art keywords
time series
series data
time
expiry
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/628,463
Other languages
English (en)
Inventor
Arvind Jayaprakash
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
InMobi Pte Ltd
Original Assignee
InMobi Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by InMobi Pte Ltd filed Critical InMobi Pte Ltd
Publication of US20150242326A1 publication Critical patent/US20150242326A1/en
Priority to US15/913,744 priority Critical patent/US10191848B2/en
Assigned to INMOBI PTE. LTD. reassignment INMOBI PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JAYAPRAKASH, ARVIND
Priority to US16/259,621 priority patent/US10725921B2/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0862 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/46 Caching storage objects of specific type in disk cache
    • G06F 2212/465 Structured object, e.g. database record
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/46 Caching storage objects of specific type in disk cache
    • G06F 2212/466 Metadata, control data
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/60 Details of cache memory
    • G06F 2212/602 Details relating to cache prefetching
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/60 Details of cache memory
    • G06F 2212/6024 History based prefetching

Definitions

  • The present invention relates to time series data and, in particular, to the caching of time series data.
  • Time series data refers to sequences of data points measured over a span of time, often spaced at uniform time intervals.
  • Time series data is often stored on a remote server known as a historian.
  • The historian is responsible for collecting and cleaning raw time series data. For analysis and query purposes, time series data is fetched from the historian.
  • Due to the ever-increasing size of time series data, retrieval of time series data is an expensive operation in terms of network resources.
  • The present invention provides a computer system for caching time series data.
  • The computer system includes one or more processors, at least one cache, and a computer readable storage medium.
  • The computer readable storage medium contains instructions that, when executed by the one or more processors, cause the one or more processors to perform a set of steps comprising: fetching the time series data from a time series data source, calculating one or more expiry timestamps, grouping the plurality of time series datum into one or more time data chunks based on the one or more expiry timestamps, and storing a copy of the time series data and the one or more expiry timestamps in the at least one cache.
  • the time series data includes a plurality of time series datum and a fetch timestamp.
  • Each expiry timestamp from the one or more expiry timestamps is calculated using a composite function of the fetch timestamp of the time series data and a recording time associated with a time series datum, such that the expiry timestamp is inversely proportional to the recording time associated with the time series datum and directly proportional to the fetch timestamp of the time series data.
  • Each time data chunk from the one or more time data chunks includes a distinct set of time series datum from the time series data.
  • the one or more processors are configured to receive a request for the time series data, decompose the request into one or more sub requests based on the one or more time data chunks of the time series data, determine the validity of the one or more time data chunks of the time series data based on the one or more expiry timestamps; and serve the one or more sub requests from one of a group consisting of the time series data source and the at least one cache, based on the validity of the one or more time data chunks of the time series data.
  • the one or more processors are configured to determine the validity of the one or more time data chunks by comparing an associated expiry timestamp with a request timestamp associated with the request.
  • The composite function is a monotonically non-increasing function with a predetermined upper limit.
  • the at least one cache is a browser cache.
  • the present invention provides a computer implemented method for caching time series data.
  • The computer implemented method comprises fetching, by one or more processors, the time series data from a time series data source; calculating, by one or more processors, one or more expiry timestamps; grouping, by one or more processors, the plurality of time series datum into one or more time data chunks based on the one or more expiry timestamps; and storing, by one or more processors, a copy of the time series data and the one or more expiry timestamps in at least one cache.
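  • For concreteness, the structures recited above might be modelled as in the following minimal Python sketch; the names TimeSeriesDatum, TimeDataChunk and CachedSeries are illustrative assumptions introduced here and are not terms used by the claims.
```python
# Minimal sketch of the claimed structures; class names are illustrative
# assumptions, not terms used by the specification.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class TimeSeriesDatum:
    value: float            # the recorded measurement
    record_time: datetime   # recording time associated with the datum


@dataclass
class TimeDataChunk:
    expiry: datetime                                  # shared expiry timestamp of the chunk
    datums: List[TimeSeriesDatum] = field(default_factory=list)

    def is_valid(self, request_time: datetime) -> bool:
        # A chunk is valid while the request timestamp precedes its expiry.
        return request_time < self.expiry


@dataclass
class CachedSeries:
    fetch_time: datetime                              # fetch timestamp of the series
    chunks: List[TimeDataChunk] = field(default_factory=list)
```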
  • FIG. 1 illustrates a system for caching time series data, in accordance with various embodiments of the present invention.
  • FIG. 2 illustrates a flowchart for serving time series data, in accordance with various embodiments of the present invention.
  • FIG. 3 illustrates a flowchart for caching time series data, in accordance with various embodiments of the present invention.
  • FIG. 4 illustrates a flowchart for caching time series data and serving a request for time series data, in accordance with various embodiments of the present invention.
  • FIG. 5 illustrates a computer node for caching time series data, in accordance with various embodiments of the present invention.
  • FIG. 1 illustrates a system 100 for caching time series data, in accordance with various embodiments of the present invention.
  • the computing system 100 includes a user terminal 110 .
  • the user terminal 110 refers to a workstation or a terminal used by a user 120 .
  • the user terminal 110 includes one or more processors, a computer readable storage medium and at least one cache.
  • the user terminal 110 allows the user 120 to retrieve the time series data from a historian server 130 , which is a time series data source.
  • the user terminal 110 includes a browser with a browser cache.
  • the user 120 uses the browser to view the time series data.
  • the browser stores the time series data in the browser cache for faster access.
  • The user terminal 110 stores the time series data in the at least one cache for faster access.
  • the user terminal 110 allows the user 120 to execute queries on the time series data present on the historian server 130 .
  • the historian server 130 stores the time series data.
  • the historian server 130 includes one or more processors, a computer readable storage medium and at least one server side cache.
  • the historian server 130 receives the time series data from a cluster of application servers 140 , which includes a plurality of time series data sources.
  • the cluster of application servers 140 includes application server 142 , application server 144 , and application server 146 .
  • Each application server from the cluster of application servers 140 generates a log file which contains the time series data.
  • the log files are sent to the historian server 130 by the cluster of application servers 140 .
  • The cluster of application servers 140 consists of advertisement tracking servers that maintain logs of countable metrics, such as clicks and impressions.
  • The historian server 130 assimilates the log files, cleans the data in the log files by removing inconsistent and incomplete records, and stores the resultant time series data. In an embodiment, the historian server 130 stores recently accessed time series data in the at least one server side cache for faster service.
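  • As a purely illustrative sketch of the assimilation and cleaning step (the comma-separated log format and the helper name clean_log_lines are assumptions, since the description does not specify a log format), incomplete or inconsistent records might be dropped as follows:
```python
# Hypothetical log format: one "timestamp,value" record per line; real
# historians will differ. Incomplete or unparsable records are dropped.
from datetime import datetime
from typing import Iterable, List, Tuple


def clean_log_lines(lines: Iterable[str]) -> List[Tuple[datetime, float]]:
    cleaned: List[Tuple[datetime, float]] = []
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) != 2:                       # incomplete record
            continue
        try:
            record_time = datetime.fromisoformat(parts[0])
            value = float(parts[1])
        except ValueError:                        # inconsistent record
            continue
        cleaned.append((record_time, value))
    return cleaned
```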
  • Although FIG. 1 shows the historian server 130 as a single computing device, the historian server 130 can include multiple computing devices connected together.
  • Similarly, although FIG. 1 shows a single user terminal 110, the present invention can be practiced with a plurality of computing devices. In an embodiment, the plurality of computing devices includes a chain of computing devices connected sequentially, and the present invention is compatible with such a chain of computing devices.
  • Although FIG. 1 shows three application servers 142, 144 and 146, there can be one or more application servers in the cluster of application servers 140. Further, the application servers can be used for a plurality of purposes, such as factory metrics recording, product quality control and automation, weather monitoring, etc.
  • FIG. 2 illustrates a flowchart 200 for serving a request for time series data, in accordance with various embodiments of the present invention.
  • the flowchart 200 is implemented using the user terminal 110 .
  • the flowchart 200 initiates.
  • the user terminal 110 receives a request for time series data.
  • The user 120 requests time series data between 2 pm and 6 pm.
  • The requested time series data has 6 time series datum, since the granularity of sampling is 1 hour in the current example.
  • The user terminal 110 calculates one or more expiry timestamps. Continuing the above-mentioned example, the user terminal 110 calculates each expiry timestamp from the one or more expiry timestamps using a composite function of a recording time of a time series datum and the current time, 7 pm. Each expiry timestamp calculated using the composite function is inversely proportional to the recording time associated with the time series datum (denoted T_record) and directly proportional to the current time (denoted T_current).
  • The composite function is a monotonically non-increasing function between the recording timestamp and the expiry timestamp, i.e., as the recording time increases, the value of the expiry timestamp decreases.
  • An expiry timestamp is calculated using the composite function for each time series datum present in the time series data.
  • An illustration of a possible composite function with an upper limit of 24 hours is defined below:
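  • The formula itself does not survive in this text. The sketch below shows one composite function with the stated properties (monotonically non-increasing in the recording time, increasing in the current or fetch time, with a 24-hour upper limit); the 3-hour boundary and the 1-hour/15-minute TTL values are assumptions chosen so that the running example reproduces the 8:00 pm and 7:15 pm expiry timestamps used in the following steps, and they are not the patent's own formula.
```python
# Assumed tiered time-to-live: older datums are unlikely to change, so they
# stay cached longer; recent datums expire quickly. The tier boundary and TTL
# values are illustrative assumptions; the 24-hour upper limit comes from the
# description, and applying it to the TTL is itself an interpretation.
from datetime import datetime, timedelta

MAX_TTL = timedelta(hours=24)                    # predetermined upper limit


def composite_expiry(record_time: datetime, fetch_time: datetime) -> datetime:
    """T_expiry = F_composite(T_record, T_fetch)."""
    age = fetch_time - record_time               # older datum => larger age
    ttl = timedelta(hours=1) if age >= timedelta(hours=3) else timedelta(minutes=15)
    return fetch_time + min(ttl, MAX_TTL)        # never cache beyond the cap
```
  • Under this assumed function, a datum recorded at 2 pm and evaluated at 7 pm expires at 8:00 pm, while a datum recorded at 6 pm expires at 7:15 pm.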
  • the user terminal 110 decomposes the request into one or more sub requests based on the one or more expiry timestamps.
  • The request is broken into two sub requests: the first sub request for the time series data having expiry timestamp 8:00 pm, and the second sub request for the time series data having expiry timestamp 7:15 pm.
  • The user terminal 110 serves the one or more sub requests from at least one of the historian server 130 and a cache memory.
  • The user terminal 110 queries the cache memory to determine whether a copy of the time series data exists in the cache. If the copy exists, the user terminal 110 serves a sub request from the cache; if the cache does not contain the copy, the user terminal 110 retrieves the time series data from the historian server 130.
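  • A sketch of the decomposition and cache-or-historian routing of flowchart 200 is shown below; it assumes the composite_expiry function and the classes from the earlier sketches, and fetch_chunk is a hypothetical callable standing in for the query sent to the historian server 130.
```python
# Sketch of request decomposition and serving at the user terminal; the cache
# layout (a dict keyed by the tuple of requested recording times) is an
# illustrative assumption.
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Callable, Dict, List, Tuple

ChunkKey = Tuple[datetime, ...]


def decompose_request(start: datetime, end: datetime, step: timedelta,
                      now: datetime) -> List[List[datetime]]:
    """Split the requested window into sub requests, one per distinct
    expiry timestamp (i.e. one per prospective time data chunk)."""
    by_expiry: Dict[datetime, List[datetime]] = defaultdict(list)
    t = start
    while t <= end:
        by_expiry[composite_expiry(t, now)].append(t)
        t += step
    return list(by_expiry.values())


def serve_request(sub_requests: List[List[datetime]],
                  cache: Dict[ChunkKey, TimeDataChunk],
                  fetch_chunk: Callable[[List[datetime]], TimeDataChunk],
                  now: datetime) -> List[TimeSeriesDatum]:
    """Serve each sub request from the cache when a valid copy exists,
    otherwise from the historian via the hypothetical fetch_chunk()."""
    served: List[TimeSeriesDatum] = []
    for record_times in sub_requests:
        key: ChunkKey = tuple(record_times)
        chunk = cache.get(key)
        if chunk is None or not chunk.is_valid(now):   # miss or expired copy
            chunk = fetch_chunk(record_times)          # go to the historian
            cache[key] = chunk                         # refresh the local copy
        served.extend(chunk.datums)
    return served
```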
  • the flowchart 200 terminates.
  • FIG. 3 illustrates a flowchart 300 for caching the time series data, in accordance with various embodiments of the present invention.
  • the flowchart 300 is implemented using the historian server 130 .
  • the flowchart 300 initiates.
  • the historian server 130 fetches the time series data from the application servers 140 .
  • the historian server 130 fetches the time series data in response to a determination that a valid copy of the time series data does not exist locally.
  • the historian server 130 fetches the time series data in response to a user query.
  • the historian server 130 automatically fetches the time series data in response to a pre-fetching policy.
  • the time series data includes a plurality of time series datum and a fetch timestamp.
  • the fetch timestamp indicates the time at which the time series data was fetched by the historian server 130 .
  • the historian server 130 records the fetch timestamp on receiving a request.
  • the time series data includes ten time series datum: {45, 65, 78, 90, 112, 120, 123, 145, 170, 210} and a fetch timestamp: 9 pm.
  • the time series data has a sampling period of half an hour.
  • the first time series datum 45 is recorded at 2 pm
  • the second time series datum 65 is recorded at 2:30 pm
  • The rest of the time series data is similarly recorded at periodic intervals of half an hour.
  • The fetch timestamp 9 pm indicates that the time series data was fetched at 9 pm.
  • the historian server 130 calculates one or more expiry timestamps.
  • Each expiry timestamp from the one or more expiry timestamps is calculated using a composite function of a recording time of a time series datum and the fetch timestamp.
  • Each expiry timestamp calculated using the composite function is inversely proportional to the recording time associated with the time series datum and directly proportional to the fetch timestamp of the time series data.
  • The composite function is a monotonically non-increasing function between the recording timestamp and the expiry timestamp, i.e., as the recording time increases, the value of the expiry timestamp decreases.
  • An expiry timestamp is calculated using the composite function for each time series datum present in the time series data.
  • An illustration of a possible composite function with an upper limit of 24 hours is defined below:
  • T_expiry = F_composite(T_record, T_fetch)
  • the historian server 130 groups the plurality of time series datum into one or more time data chunks based on the one or more expiry timestamps. In an embodiment, all the time series datum which have the same expiry timestamp value are grouped together to form one time data chunk. Each time data chunk from the one or more time data chunks comprises a distinct set of time series datum from the time series data, i.e. no two time data chunks can have a common time series datum.
  • The historian server 130 groups the ten time series datum into two time data chunks.
  • the historian server 130 stores a copy of the time series data and the one or more expiry timestamps in the at least one cache. Expiry timestamps serve as indicators about the validity of the time series data stored in the at least one cache. Any request for the time series data will check the expiry timestamps to verify if the time series data stored in the at least one cache is valid or not.
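  • The grouping and storing steps might be sketched as follows, reusing the classes and the assumed composite_expiry function from the sketches above; the short demonstration at the end reproduces the ten-datum example (half-hour samples from 2 pm, fetched at 9 pm), which collapses into two time data chunks.
```python
# Group datums that share an expiry timestamp into one time data chunk and
# store the chunks, together with their expiry timestamps, in the cache.
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Dict, List


def group_into_chunks(datums: List[TimeSeriesDatum],
                      fetch_time: datetime) -> List[TimeDataChunk]:
    by_expiry: Dict[datetime, List[TimeSeriesDatum]] = defaultdict(list)
    for d in datums:
        by_expiry[composite_expiry(d.record_time, fetch_time)].append(d)
    # No datum appears in more than one chunk: each is assigned exactly once.
    return [TimeDataChunk(expiry=e, datums=ds) for e, ds in by_expiry.items()]


def store_in_cache(cache: Dict[str, CachedSeries], series_id: str,
                   datums: List[TimeSeriesDatum], fetch_time: datetime) -> CachedSeries:
    # Keying the cache by a series identifier is an illustrative assumption.
    entry = CachedSeries(fetch_time, group_into_chunks(datums, fetch_time))
    cache[series_id] = entry
    return entry


if __name__ == "__main__":
    # Running example: ten half-hour samples starting at 2 pm, fetched at 9 pm.
    values = [45, 65, 78, 90, 112, 120, 123, 145, 170, 210]
    start = datetime(2015, 2, 23, 14, 0)          # the date is arbitrary
    datums = [TimeSeriesDatum(v, start + i * timedelta(minutes=30))
              for i, v in enumerate(values)]
    for chunk in group_into_chunks(datums, datetime(2015, 2, 23, 21, 0)):
        print(chunk.expiry.time(), [d.value for d in chunk.datums])
    # Under the assumed composite function this prints two chunks, with
    # expiries 22:00 (10 pm) and 21:15 (9:15 pm).
```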
  • the flowchart 300 terminates.
  • FIG. 4 illustrates a flowchart 400 for caching time series data and serving a request for time series data, in accordance with various embodiments of the present invention.
  • The flowchart 400 is implemented using the historian server 130.
  • At step 405, the flowchart 400 initiates. Steps 410-425 of the flowchart 400 are similar to steps 310-350 of the flowchart 300.
  • The historian server 130 fetches the time series data from the cluster of application servers 140.
  • the historian server 130 calculates one or more expiry timestamps.
  • Each expiry timestamp from the one or more expiry timestamps is calculated using a composite function of a recording time of a time series datum and the fetch timestamp.
  • Each expiry timestamp calculated using the composite function is inversely proportional to the recording time associated with the time series datum and directly proportional to the fetch timestamp of the time series data.
  • The composite function is a monotonically non-increasing function between the recording timestamp and the expiry timestamp, i.e., as the value of the recording time increases, the value of the expiry timestamp decreases.
  • the historian server 130 groups the plurality of time series datum into one or more time data chunks based on the one or more expiry timestamps. In an embodiment, all the time series datum which have the same expiry timestamp value are grouped together to form one time data chunk. Each time data chunk from the one or more time data chunks comprises a distinct set of time series datum from the time series data, i.e. no two time data chunks can have a common time series datum.
  • the historian server 130 stores a copy of the time series data and the one or more expiry timestamps in the at least one cache.
  • the historian server 130 receives a request for the time series data.
  • the user 120 makes the request using the browser on the user terminal 110 .
  • The user terminal receives, at 9:45 pm, a request for the time series data recorded between 2 pm and 6:30 pm.
  • The historian server 130 decomposes the request into one or more sub requests based on the one or more time data chunks of the time series data. Continuing the abovementioned example, the request is broken into two sub requests: the first sub request for the first time data chunk and the second sub request for the second time data chunk.
  • the historian server 130 determines the validity of the one or more time data chunks of the time series data based on the one or more expiry timestamps. Continuing the abovementioned example, the historian server 130 determines the validity of the first time data chunk and the second time data chunk. The first time data chunk has the expiry timestamp 10 pm, and therefore is valid at 9:45 pm. The second time data chunk has the expiry timestamp 9:15 pm, and therefore is invalid at 9:45 pm.
  • the historian server 130 serves the one or more sub requests from one of a group consisting of the time series data source and the at least one cache, based on the validity of the one or more time data chunks of the time series data.
  • the historian server 130 serves the first sub request by retrieving the first time data chunk from the at least one cache as the first time data chunk is still valid. Since the second time data chunk is invalid, the historian server 130 serves the second sub request by fetching the second time data chunk from the application servers 140 .
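  • The validity determination and routing described above reduce to comparing each chunk's expiry timestamp with the request timestamp, as in the sketch below (continuing the classes from the earlier sketches); fetch_from_source is a hypothetical stand-in for the call to the cluster of application servers 140.
```python
# Serve each cached time data chunk while it is still valid; re-fetch invalid
# chunks from the time series data source.
from datetime import datetime
from typing import Callable, Iterable, List


def serve_sub_requests(chunks: Iterable[TimeDataChunk], request_time: datetime,
                       fetch_from_source: Callable[[TimeDataChunk], TimeDataChunk]
                       ) -> List[TimeSeriesDatum]:
    served: List[TimeSeriesDatum] = []
    for chunk in chunks:
        if chunk.is_valid(request_time):          # e.g. 9:45 pm < 10:00 pm
            served.extend(chunk.datums)           # serve from the cache
        else:                                     # e.g. 9:45 pm >= 9:15 pm
            fresh = fetch_from_source(chunk)      # fetch from the application servers
            served.extend(fresh.datums)
    return served
```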
  • The flowchart 400 terminates.
  • By decomposing the request into sub requests and chunking the time series data into time data chunks, the present invention is able to create an optimal caching policy for the time series data. Since older time series data values are less likely to change, the composite function calculates expiry timestamps that reflect this property. Moreover, by dividing the request into sub requests, the present invention ensures that only the time data chunks which are invalid are served from the time series data sources, and not the entire time series data. By doing so, the present invention reduces network load and improves speed of access.
  • FIG. 5 illustrates a computer node 500 for caching time series data, in accordance with various embodiments of the present invention.
  • The components of the computer node 500 include, but are not limited to, one or more processors 530, a memory module 555, a network adapter 520, an input-output (I/O) interface 540 and one or more buses that couple the various system components to the one or more processors 530.
  • The one or more buses represent one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • bus architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus.
  • the computer node 500 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by the computer node 500 , and includes both volatile and non-volatile media, removable and non-removable media.
  • the memory module 555 includes computer system readable media in the form of volatile memory, such as random access memory (RAM) 560 and at least one cache 570 .
  • the computer node 500 may further include other removable/non-removable, non-volatile computer system storage media.
  • the memory module 555 includes a storage system 580 .
  • the computer node 500 communicates with one or more external devices 550 and a display 510 , via input-output (I/O) interfaces 540 .
  • the computer node 500 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (for example, the Internet) via the network adapter 520 .
  • Aspects can be embodied as a system, method or computer program product. Accordingly, aspects of the present invention can take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects, all of which may generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention can take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium can be a computer readable storage medium.
  • a computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium can be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Computer program code for carrying out operations for aspects of the present invention can be written in any combination of one or more programming languages, including an object oriented programming language and conventional procedural programming languages.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/913,744 US10191848B2 (en) 2014-02-24 2018-03-06 System and method for caching time series data
US16/259,621 US10725921B2 (en) 2014-02-24 2019-01-28 System and method for caching time series data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN896CH2014 IN2014CH00896A 2014-02-24 2014-02-24
IN896/CHE/2014 2014-02-24

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/913,744 Continuation US10191848B2 (en) 2014-02-24 2018-03-06 System and method for caching time series data

Publications (1)

Publication Number Publication Date
US20150242326A1 true US20150242326A1 (en) 2015-08-27

Family

ID=53882341

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/628,463 Abandoned US20150242326A1 (en) 2014-02-24 2015-02-23 System and Method for Caching Time Series Data
US15/913,744 Active US10191848B2 (en) 2014-02-24 2018-03-06 System and method for caching time series data
US16/259,621 Active US10725921B2 (en) 2014-02-24 2019-01-28 System and method for caching time series data

Family Applications After (2)

Application Number Title Priority Date Filing Date
US15/913,744 Active US10191848B2 (en) 2014-02-24 2018-03-06 System and method for caching time series data
US16/259,621 Active US10725921B2 (en) 2014-02-24 2019-01-28 System and method for caching time series data

Country Status (2)

Country Link
US (3) US20150242326A1 (en)
IN (1) IN2014CH00896A

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6654855B1 (en) * 2000-10-26 2003-11-25 Emc Corporation Method and apparatus for improving the efficiency of cache memories using chained metrics
US7676288B2 (en) * 2006-06-23 2010-03-09 Invensys Systems, Inc. Presenting continuous timestamped time-series data values for observed supervisory control and manufacturing/production parameters
US9053038B2 (en) * 2013-03-05 2015-06-09 Dot Hill Systems Corporation Method and apparatus for efficient read cache operation
US9201800B2 (en) * 2013-07-08 2015-12-01 Dell Products L.P. Restoring temporal locality in global and local deduplication storage systems

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6266605B1 (en) * 1998-12-17 2001-07-24 Honda Giken Kogyo Kabushiki Kaisha Plant control system
US20070028070A1 (en) * 2005-07-26 2007-02-01 Invensys Systems, Inc. Method and system for time-weighted history block management
US20110153603A1 (en) * 2009-12-17 2011-06-23 Yahoo! Inc. Time series storage for large-scale monitoring system
US20110167486A1 (en) * 2010-01-05 2011-07-07 Kalyan Ayloo Client-side ad caching for lower ad serving latency
US20140172867A1 (en) * 2012-12-17 2014-06-19 General Electric Company Method for storage, querying, and analysis of time series data
US20140358968A1 (en) * 2013-06-04 2014-12-04 Ge Intelligent Platforms, Inc. Method and system for seamless querying across small and big data repositories to speed and simplify time series data access

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160012095A1 (en) * 2014-07-11 2016-01-14 Canon Kabushiki Kaisha Information processing method, storage medium, and information processing apparatus
US10055443B2 (en) * 2014-07-11 2018-08-21 Canon Kabushiki Kaisha Information processing method, storage medium, and information processing apparatus
US10628079B1 (en) * 2016-05-27 2020-04-21 EMC IP Holding Company LLC Data caching for time-series analysis application
US11182097B2 (en) * 2019-05-14 2021-11-23 International Business Machines Corporation Logical deletions in append only storage devices
CN110633277A (zh) * 2019-08-13 2019-12-31 Ping An Technology (Shenzhen) Co., Ltd. Time series data storage method and apparatus, computer device and storage medium
CN117009755A (zh) * 2023-10-07 2023-11-07 Guoyi Quantum (Hefei) Technology Co., Ltd. Waveform data processing method, computer-readable storage medium and electronic device

Also Published As

Publication number Publication date
IN2014CH00896A 2015-08-28
US10191848B2 (en) 2019-01-29
US10725921B2 (en) 2020-07-28
US20180260327A1 (en) 2018-09-13
US20190155734A1 (en) 2019-05-23

Similar Documents

Publication Publication Date Title
US10725921B2 (en) System and method for caching time series data
US9811577B2 (en) Asynchronous data replication using an external buffer table
US10275481B2 (en) Updating of in-memory synopsis metadata for inserts in database table
US10223287B2 (en) Method and system for cache management
US9251184B2 (en) Processing of destructive schema changes in database management systems
US20130254377A1 (en) Server and method for managing monitored data
US10761935B2 (en) Accelerating system dump capturing
US10671592B2 (en) Self-maintaining effective value range synopsis in presence of deletes in analytical databases
CN110874364B (zh) 一种查询语句处理方法、装置、设备及存储介质
US11036701B2 (en) Data sampling in a storage system
US20170075954A1 (en) Identification and elimination of non-essential statistics for query optimization
CN109597724B (zh) 服务稳定性测量方法、装置、计算机设备及存储介质
US11604803B2 (en) Net change mirroring optimization across transactions in replication environment
US10353930B2 (en) Generating a trust factor for data in non-relational or relational databases
US10262022B2 (en) Estimating database modification
US9172739B2 (en) Anticipating domains used to load a web page
US20190324921A1 (en) Optimizing cache performance with probabilistic model
US20140358878A1 (en) Maintaining Database Consistency When Nearing the End of a Database Recovery Log

Legal Events

Date Code Title Description
AS Assignment

Owner name: INMOBI PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JAYAPRAKASH, ARVIND;REEL/FRAME:045125/0853

Effective date: 20180228

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE