CN112035520A - Method for judging self-set delay repeatability of streaming data in real time - Google Patents

Method for judging self-set delay repeatability of streaming data in real time

Info

Publication number
CN112035520A
CN112035520A
Authority
CN
China
Prior art keywords
window
autocorrelation
delay
computation
components
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910478153.XA
Other languages
Chinese (zh)
Inventor
吕纪竹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201910478153.XA priority Critical patent/CN112035520A/en
Publication of CN112035520A publication Critical patent/CN112035520A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2453 Query optimisation
    • G06F 16/24534 Query rewriting; Transformation
    • G06F 16/24535 Query rewriting; Transformation of sub-queries or views
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/24568 Data stream processing; Continuous queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2474 Sequence data queries, e.g. querying versioned data
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Image Processing (AREA)

Abstract

The autocorrelation at a given delay can be used to determine whether a time series or streaming data repeats itself after that delay. The invention discloses a method, a system and a computing-system program product for judging, in real time, the repeatability of a given delay of time series or streaming data by iteratively computing the autocorrelation at the given delay over a computation window of a given size. Embodiments of the present invention include iteratively calculating two or more components of the autocorrelation at the specified delay for the adjusted computation window based on two or more components of the autocorrelation at the specified delay for the pre-adjustment computation window, and then, as needed, generating the autocorrelation at the specified delay for the adjusted computation window from the iteratively calculated components. Iterative computation of the autocorrelation avoids accessing all data elements in the adjusted computation window and performing repeated computations, thereby improving computation efficiency, saving computing resources and reducing the energy consumption of the computing system, so that real-time judgment of the repeatability of a given delay in streaming data becomes efficient and economical, and some real-time judgment scenarios that would otherwise be impossible become possible.

Description

Method for judging self-set delay repeatability of streaming data in real time
Technical Field
Big data or streaming data analysis.
Background
The internet, mobile communications, navigation, online browsing, sensing technologies and large-scale computing infrastructure generate massive amounts of data every day. Big data is data that exceeds the processing capability of traditional database systems and the analysis capability of traditional analysis methods because of its large volume and its rapid rate of change and growth.
Streaming data is data that is continuously transmitted by a provider and continuously received. It may be real-time data gathered from sensors and continuously transmitted to a computing or electronic device; typically this involves receiving similarly formatted data elements at successive time intervals. Streaming data may also be data read continuously from a storage device, for example a large data set stored on one or more storage devices.
Autocorrelation, also known as delayed correlation or serial correlation, measures how well a time series correlates with itself delayed by l time points. It can be obtained by dividing the covariance of observations of the time series separated by l time points by the variance of the series. An autocorrelation value at a certain delay that is 1 or close to 1 indicates that the streaming data, or streaming big data, repeats itself after that delay. Judging the repeatability of a given delay of streaming data based on the autocorrelation at that delay is therefore straightforward; the difficulty and challenge lie in computing the autocorrelation over streaming data in real time.
The autocorrelation may need to be recalculated after the streaming data changes in order to reflect the most recent data. For example, the autocorrelation may be computed over a computation window containing n data elements of a large data set newly added to the storage medium: each time a data element is received or accessed, it is added to the computation window, the oldest data element is removed from the computation window, and the n data elements in the window are accessed to recalculate the autocorrelation. Each data change thus alters only a small portion of the data in the computation window, yet recalculating the autocorrelation from all data elements in the window repeats data accesses and computations and is therefore time-consuming and wasteful of resources.
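To make the cost of this traditional approach concrete, the following sketch (illustrative; the function name and example values are not from the patent) recomputes the autocorrelation from its definition on every window update, touching all n data elements each time:

```python
def autocorrelation(xs, l):
    """Autocorrelation of window xs at delay l, computed from scratch.

    Every call touches all n data elements, so a stream that slides the
    window once per arriving element pays O(n) work per update.
    """
    n = len(xs)
    mean = sum(xs) / n
    sx = sum((x - mean) ** 2 for x in xs)          # sum of squared deviations
    cov = sum((xs[i] - mean) * (xs[i - l] - mean)  # delayed cross-products
              for i in range(l, n))
    return cov / sx

# Window [1, 2, 3, 4], delay 1: the whole window is re-read on every slide.
print(autocorrelation([1, 2, 3, 4], 1))  # prints 0.25
```

Repeating this full pass after each arriving element is exactly the redundancy that the iterative scheme below removes.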
Depending on need, the computation window may be very large; for example, its data elements may be distributed across thousands of computing devices of a cloud platform. Recalculating the autocorrelation with traditional methods after each data change cannot be done in real time; it occupies and wastes large amounts of computing resources, so the repeatability of a given delay of the streaming data cannot be judged in real time as required.
Disclosure of Invention
The present invention extends to methods, systems and computing-device program products for computing the autocorrelation of a given delay of streaming data in an iterative manner, so that the repeatability of the given delay of the streaming data can be determined in real time. Iteratively calculating the autocorrelation of a specified delay l (l > 0) for an adjusted computation window comprises iteratively calculating two or more (p, p > 1) components of the autocorrelation of the specified delay for the adjusted computation window based on the p components for the pre-adjustment computation window, and then, as needed, generating the autocorrelation of the specified delay for the adjusted computation window from the iteratively calculated components. Iterative computation of the autocorrelation needs to access and use only the iteratively computed components, the added and removed data elements, and the data elements adjacent to the added and removed elements on the two sides of the computation window. It thereby avoids accessing all data elements in the adjusted computation window and repeating computations, which reduces data-access latency, improves computation efficiency, saves computing resources and reduces the energy consumption of the computing system, so that real-time judgment of the repeatability of a given delay in streaming data becomes efficient and economical, and some scenarios of judging that repeatability in real time become possible that otherwise would not be.
The computing system includes a buffer to store the stream data elements. This buffer may be in memory or other computer readable medium, such as a hard disk or other medium, or even distributed files distributed across multiple computing devices logically interconnected end-to-end to form a "circular buffer".
A computation window is a moving window over the stream data that contains the data involved in the autocorrelation calculation. The computation window may move to the left or to the right. For example, when processing a real-time data stream the computation window moves to the right: a new data element is added on the right side of the window and the oldest data element on the left side is removed. The computation window moves to the left when recalculating the autocorrelation over earlier data elements of the stream: a data element is added on the left side of the window and a data element on the right side is removed. The goal is to iteratively compute the autocorrelation of a given delay for the data elements in the computation window each time the window moves one or more data elements to the left or right. Both cases can be handled in the same way, only with different equations for the iterative calculations. By way of example and not limitation, the description below uses the first case (the computation window moving to the right) to describe and explain an implementation of the present invention.
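The two movement directions can be sketched with a double-ended queue (a minimal illustration; the patent does not prescribe any particular data structure, and the example values are hypothetical):

```python
from collections import deque

# Right-moving case (live stream): add on the right, the oldest element on
# the left is evicted because the deque is bounded by maxlen.
window = deque([8, 3, 6, 1], maxlen=4)
window.append(9)
assert list(window) == [3, 6, 1, 9]

# Left-moving case (reprocessing earlier stream data): add on the left,
# remove from the right.
window2 = deque([3, 6, 1, 9])
window2.appendleft(8)
window2.pop()
assert list(window2) == [8, 3, 6, 1]
```

Either direction changes only one element at each end, which is why an iterative update can avoid re-reading the whole window.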
A computing system initializes two or more (p, p > 1) components of the autocorrelation of a given delay l (l > 0) for a pre-adjustment computation window of the data stream. Initializing the two or more components includes either computing them from the data elements in the pre-adjustment computation window according to their definitions, or receiving or accessing already-computed components from a computing-device-readable medium.
The computing system receives a new data element.
The computing system saves the new data element into the input buffer.
The computing system adjusts the pre-adjustment computing window by removing the oldest data elements from the pre-adjustment computing window and adding the received data elements to the pre-adjustment computing window.
The computing system directly iteratively computes one or more (say v, 1 ≦ v ≦ p) components of the autocorrelation of the specified delay for the adjusted computation window. Directly iteratively computing the v components includes: accessing the v components of the specified delay computed for the pre-adjustment computation window; mathematically removing the contribution of the removed data element from each accessed component; and mathematically adding the contribution of the added data element to each accessed component.
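The "mathematically remove / mathematically add" steps are easiest to see on a sum component. A hedged sketch (the patent does not fix a particular component set; the function name and values are illustrative):

```python
def slide_sum(s_old, removed, added):
    """Directly iterate a sum component: subtract the removed element's
    contribution and add the new element's contribution. This is O(1)
    work regardless of the window size n."""
    return s_old - removed + added

s = sum([8, 3, 6, 1])     # initialize the component by its definition: 18
s = slide_sum(s, 8, 9)    # window becomes [3, 6, 1, 9]
assert s == sum([3, 6, 1, 9])
```

The same remove-then-add pattern applies to other directly iterated components, such as a sum of squares.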
The computing system indirectly iteratively computes, as needed, w = p - v components of the autocorrelation of the specified delay for the adjusted computation window. Indirectly iteratively computing the w components includes indirectly iteratively computing each of them one by one. Indirectly iteratively computing one component of the specified delay includes accessing one or more components of the specified delay other than that component and computing the component from them. These one or more components may themselves have been initialized, directly iteratively computed or indirectly iteratively computed.
The computing system generates a delay-specified autocorrelation of the adjusted computation window based on one or more iteratively computed components of the delay-specified autocorrelation of the adjusted computation window.
The computing system may keep receiving a new data element, saving it into the input buffer, adjusting the computation window, directly iteratively computing v (2 ≦ v ≦ p) components of the specified delay, indirectly iteratively computing w = p - v components of the specified delay as needed, and computing the autocorrelation of the specified delay. The computing system may repeat this process as many times as needed.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or from the practice of the invention.
Drawings
In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. These drawings depict only typical embodiments of the invention and are not therefore to be considered as limiting its scope:
FIG. 1 illustrates a high-level overview of an example computing system that supports iteratively computing autocorrelation.
FIG. 1-1 shows an example computing system architecture that supports iteratively computing the autocorrelation of streaming data, in which all components are computed in a direct iterative manner.
FIG. 1-2 illustrates an example computing system architecture that supports iteratively computing the autocorrelation of streaming data, in which some components are computed in a direct iterative manner and some in an indirect iterative manner.
FIG. 2 shows a flow diagram of an example method of iteratively calculating an autocorrelation of streaming data.
Fig. 3-1 shows the data removed and the data added as the calculation window 300A moves to the right.
Fig. 3-2 shows data accessed for iterative computation of autocorrelation as the computation window 300A moves to the right.
Fig. 3-3 show the data removed and the data added as the calculation window 300B moves to the left.
Fig. 3-4 show data accessed for iterative computation of autocorrelation as the computation window 300B moves to the left.
Fig. 4-1 shows the definition of the autocorrelation and the conventional equation for calculating the autocorrelation.
Fig. 4-2 shows a first autocorrelation iterative calculation algorithm (iterative algorithm 1).
Fig. 4-3 shows a second iterative computation algorithm for autocorrelation (iterative algorithm 2).
Fig. 4-4 show a third iterative computation algorithm for autocorrelation (iterative algorithm 3).
FIG. 5-1 shows a first calculation window for one example of a calculation.
Fig. 5-2 shows a second calculation window for one example of calculation.
Fig. 5-3 show a third calculation window for one example of calculation.
Fig. 6-1 shows a comparison of the computation of the conventional and iterative autocorrelation algorithms at a computation window size of 4 and a delay of 1.
Fig. 6-2 shows a comparison of the computation of the conventional and iterative autocorrelation algorithms at a computation window size of 1000000 with a delay of 1.
Detailed Description
Calculating the autocorrelation is an effective way of judging the repeatability of a given delay of a time series or streaming big data. The present invention extends to methods, systems and computing-device program products for determining, in real time, the repeatability of a given delay of streaming data by iteratively computing the autocorrelation of a specified delay l (1 ≦ l < n) over a computation window of a given size n (n > 1). A computing system includes one or more processor-based computing devices, each containing one or more processors. The computing system includes an input buffer that holds the big-data or stream data elements. The buffer may reside in memory or on another computer-readable medium, such as a hard disk, or may even be a distributed file spread across multiple computing devices that are logically connected end to end to form a "circular buffer". A number of data elements from the data stream involved in the autocorrelation calculation form a pre-adjustment computation window. The computation window size n (n > l) indicates the number of data elements in a computation window of the buffer, and the delay l indicates the delay used in the autocorrelation calculation. Embodiments of the present invention include iteratively calculating two or more (p, p > 1) components of the autocorrelation of the specified delay for the adjusted computation window based on the p components for the pre-adjustment computation window, and then, as needed, generating the autocorrelation of the specified delay for the adjusted computation window from the iteratively calculated components.
Iterative computation of the autocorrelation avoids accessing all data elements in the adjusted computation window and performing repeated computations, thereby improving computation efficiency, saving computing resources and reducing the energy consumption of the computing system, so that real-time judgment of the repeatability of a given delay in streaming data becomes efficient and economical, and some real-time judgment scenarios that would otherwise be impossible become possible.
Autocorrelation, also known as delayed correlation or serial correlation, measures how well a time series correlates with itself delayed by l time points. It can be obtained by dividing the covariance of observations of the time series separated by l time points by the variance of the series. Calculating the autocorrelation for all the different delay values of a time series yields the autocorrelation function of that series. For a stationary time series the autocorrelation decreases exponentially toward 0 as the delay grows. The autocorrelation takes values between -1 and +1: a value of +1 indicates a perfect positive linear relationship between past and future values of the time series, while a value of -1 indicates a perfect negative linear relationship.
In this context, a computation window is a moving window over the stream data that contains the data involved in the autocorrelation calculation. The computation window may move to the left or to the right. For example, when processing a real-time data stream the computation window moves to the right: a new data element is added on the right side of the window and the oldest data element on the left side is removed. The computation window moves to the left when recalculating the autocorrelation over earlier data elements of the stream: a data element is added on the left side of the window and a data element on the right side is removed. The goal is to iteratively compute the autocorrelation of a given delay for the data elements in the computation window each time the window moves one or more data elements to the left or right. Both cases can be handled in the same way, only with different equations for the iterative calculations. By way of example and not limitation, the description below uses the first case (the computation window moving to the right) to describe and explain an implementation of the present invention.
In this context, a component of the autocorrelation is a quantity or expression that appears in the autocorrelation's definition formula or in any transformation of that formula. The autocorrelation itself is its own largest component. The following are some examples of autocorrelation components.
[Five formula images omitted (BDA0002082952050000061 through BDA0002082952050000065): example component expressions, where l denotes the delay.]
The autocorrelation may be calculated from one or more components or combinations thereof, so multiple algorithms support iterative autocorrelation calculation.
A component may be either directly or indirectly iteratively calculated. The difference is that a directly iteratively calculated component is computed from its own value in the previous round, whereas an indirectly iteratively calculated component is computed from components other than itself.
For a given component, it may be iteratively computed directly in one algorithm but indirectly in another algorithm.
For any algorithm, at least two components are iteratively computed: one directly, and the others either directly or indirectly. For a given algorithm, if the total number of different components used is p (p > 1) and the number of directly iteratively computed components is v (1 ≦ v ≦ p), then the number of indirectly iteratively computed components is w = p - v (0 ≦ w < p). It is possible for all components to be directly iteratively computed (in which case v = p > 1 and w = 0). However, the directly iteratively computed components must be computed whether or not the autocorrelation result is needed and accessed in a particular round.
For a given algorithm, a directly iteratively computed component must be computed in every round (i.e., each time an existing data element is removed from the computation window and a new data element is added to it). An indirectly iteratively computed component, however, can be computed as needed from one or more components other than itself, i.e., only when the autocorrelation must be computed and accessed. Thus, in rounds where the autocorrelation is not accessed, only a small number of components need to be iteratively computed. Note that an indirectly iteratively computed component may be used in the direct iterative computation of another component, in which case its computation cannot be omitted.
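As an illustration of this distinction (the class, component names and values are hypothetical, not taken from the patent), a sum can be directly iterated on every slide, while the mean is an indirectly computed component derived from the sum only when a result is actually requested:

```python
class WindowComponents:
    """Sketch of direct vs. indirect iterative components.

    The sum is directly iterated on every slide; the mean is an
    indirectly iterated component, derived from the sum only when a
    result is needed.
    """
    def __init__(self, xs):
        self.n = len(xs)
        self.s = sum(xs)            # initialized by definition

    def slide(self, removed, added):
        self.s += added - removed   # direct iteration: performed every round

    def mean(self):
        return self.s / self.n      # indirect: computed on demand from s

w = WindowComponents([8, 3, 6, 1])
w.slide(8, 9)                       # rounds where no result is accessed
w.slide(3, 2)                       # update only the directly iterated sum
print(w.mean())                     # the mean is derived only here: 4.5
```

In rounds where no autocorrelation is requested, only `slide` runs, which is the saving the paragraph above describes.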
Implementations of the invention include iteratively calculating two or more (p, p > 1) components of the autocorrelation of an adjusted computation window based on the p components calculated for the previous computation window.
The computing system includes a buffer. The buffer holds large data or stream data elements. The calculation window size n (n > l) indicates the number of data elements in a calculation window of the buffer.
The computing system initializes two or more (p, p > 1) components of the autocorrelation of a given delay l (l ≧ 1) for a pre-adjustment computation window of a given size n (n > 1). Initializing the two or more components includes computing them from the data elements in the computation window according to their definitions, or accessing or receiving already-computed components from one or more computing-device-readable media.
The computing system receives a new stream data element after receiving the one or more stream data elements.
The computing system saves the new data element to the buffer.
The computing system adjusts the pre-adjustment computing window by: removing the oldest data elements from the left side of the pre-adjustment computing window and adding new data elements to the right side of the pre-adjustment computing window.
The computing system iteratively computes a sum and/or a mean of the adjusted computation window.
The computing system directly iteratively computes one or more (say v, 1 ≦ v < p) components, other than the sum and the mean, of the autocorrelation of the specified delay l for the adjusted computation window. Directly iteratively computing the v components for the given delay l includes: accessing the removed data element, the l data elements adjacent to it in the pre-adjustment computation window, the added data element, the l data elements adjacent to it in the adjusted computation window, and the v components of the given delay l computed for the pre-adjustment computation window; mathematically removing any contribution of the removed data element from each accessed component; and mathematically adding any contribution of the added data element to each accessed component.
The computing system indirectly iteratively computes, as needed, w = p - v components of the autocorrelation of the given delay l for the adjusted computation window. Indirectly iteratively computing the w components of the given delay l includes indirectly iteratively computing each of them separately. Indirectly iteratively computing one component of the given delay l includes accessing one or more components of the given delay l other than that component and computing the component from them. Those components may themselves have been initialized, directly iteratively computed or indirectly iteratively computed.
The computing system generates, as needed, the autocorrelation of the given delay l for the adjusted computation window based on one or more of the iteratively computed components of that autocorrelation.
The computing system may continue to receive new data elements, save each one in the input buffer, adjust the computation window, iteratively compute a sum and/or a mean of the adjusted computation window, directly iteratively compute the v components of the specified delay, indirectly iteratively compute the w = p - v components of the specified delay as needed, generate the autocorrelation of the given delay from one or more iteratively computed components as needed, and repeat this process as many times as needed.
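The procedure above can be sketched end to end. The component set below (a window sum S, a sum of squares SS, and a raw delayed-product sum P) is only one possible choice and every name is illustrative; the patent does not fix a component set. The mean, the sum of squared deviations and the delayed covariance are derived indirectly, on demand:

```python
from collections import deque

class IterativeAutocorrelation:
    """Hedged sketch of the iterative scheme described above.

    Directly iterated components (illustrative choice):
      S  = sum of the window elements
      SS = sum of their squares
      P  = sum of products x[i] * x[i - l]
    The mean, the sum of squared deviations and the delayed covariance
    are indirectly derived from these only when a result is requested.
    """

    def __init__(self, xs, l):
        assert 0 < l < len(xs)
        self.l, self.n = l, len(xs)
        self.window = deque(xs)
        # Initialize the components by their definitions.
        self.S = sum(xs)
        self.SS = sum(x * x for x in xs)
        self.P = sum(xs[i] * xs[i - l] for i in range(l, self.n))

    def slide(self, added):
        """Move the window one element to the right using O(1) arithmetic,
        touching only the removed/added elements and their l-th neighbours."""
        w, l, n = self.window, self.l, self.n
        removed = w[0]
        self.S += added - removed
        self.SS += added * added - removed * removed
        # Lose the product of the removed element with its right l-neighbour;
        # gain the product of the added element with its left l-neighbour.
        self.P += added * w[n - l] - removed * w[l]
        w.popleft()
        w.append(added)

    def autocorrelation(self):
        """Indirectly derive the autocorrelation on demand."""
        w, l, n = self.window, self.l, self.n
        m = self.S / n
        sx = self.SS - n * m * m                    # sum of squared deviations
        head = sum(w[i] for i in range(l))          # first l elements
        tail = sum(w[n - 1 - i] for i in range(l))  # last l elements
        cov = (self.P - m * (self.S - tail)
               - m * (self.S - head) + (n - l) * m * m)
        return cov / sx

ac = IterativeAutocorrelation([1, 2, 3, 4], l=1)
ac.slide(5)                  # window is now [2, 3, 4, 5]
print(ac.autocorrelation())  # prints 0.25
```

Consistent with the description, each slide reads only the removed element, the added element and their l-th neighbours in the window, never the full window; the full window is touched only by the O(l) head/tail sums when a result is actually requested.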
Embodiments of the present invention may include or utilize special-purpose or general-purpose computing devices that include computing device hardware, such as the one or more processors and storage devices described in greater detail below. The scope of embodiments of the present invention also includes physical and other computing-device-readable media for carrying or storing computing-device-executable instructions and/or data structures. Such computing-device-readable media can be any media accessible by a general-purpose or special-purpose computing device. A computing-device-readable medium that stores computing-device-executable instructions is a storage medium (device); one that carries them is a transmission medium. Thus, by way of example and not limitation, embodiments of the invention may include at least two distinctly different kinds of computing-device-readable media: storage media (devices) and transmission media.
Storage media (devices) include Random Access Memory (RAM), read-only Memory (ROM), electrically erasable programmable read-only Memory (EEPROM), compact disc read-only Memory (CD-ROM), Solid State Disk (SSD), Flash Memory (Flash Memory), Phase Change Memory (PCM), other types of Memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computing device.
A "network" is defined as one or more data links that enable computing devices and/or modules and/or other electronic devices to transfer electronic data. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computing device, the computing device views the connection as a transmission medium. The transmission medium can include a network and/or data links which carry program code in the form of computing device-executable instructions or data structures and which are accessible by a general purpose or special purpose computing device. Combinations of the above should also be included within the scope of computing device readable media.
Further, program code in the form of computing device executable instructions or data structures can be transferred automatically from transmission media to storage media (devices) (or vice versa) when different computing device components are employed. For example, computing device executable instructions or data structures received from a network or data link may be staged into random access memory in a network interface module (e.g., a NIC) and then ultimately transferred to random access memory of the computing device and/or to a less volatile storage medium (device) of the computing device. It should be understood, therefore, that a storage medium (device) can be included in a computing device component that also (or even primarily) employs a transmission medium.
Computing device executable instructions include, for example, instructions and data which, when executed by a processor, cause a general purpose computing device or special purpose computing device to perform a certain function or group of functions. The computing device executable instructions may be, for example, binaries, intermediate format instructions such as assembly code, or even source code. Although the described objects have been described in language specific to structural features and/or methodological acts, it is to be understood that the objects defined in the appended claims are not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed only as examples of implementing the claims.
Embodiments of the invention may be practiced in network computing environments where many types of computing devices, including personal computers, desktop computers, notebook computers, information processors, hand-held devices, multi-processing systems, microprocessor-based or programmable consumer electronics, network computers, minicomputers, mainframe computers, supercomputers, mobile telephones, palmtops, tablets, pagers, routers, switches, and the like, may be deployed. Embodiments of the invention may also be practiced in distributed system environments where local and remote computing devices that perform tasks are interconnected by a network (i.e., either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links). In a distributed system environment, program modules may be stored in local or remote memory storage devices.
Embodiments of the invention may also be implemented in a cloud computing environment. In this description and in the following claims, "cloud computing" is defined as a model that enables on-demand access to a shared pool of configurable computing resources over a network. For example, cloud computing can be employed in the marketplace to provide ubiquitous and convenient on-demand access to a shared pool of configurable computing resources. The shared pool of configurable computing resources can be provisioned quickly through virtualization, released with low administrative overhead or low service provider interaction, and then scaled accordingly.
The cloud computing model may include various characteristics such as on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. The cloud computing model may also be embodied in various service models, for example, software as a service ("SaaS"), platform as a service ("PaaS"), and infrastructure as a service ("IaaS"). The cloud computing model may also be deployed through different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth.
Since the invention effectively reduces the requirements on computing power, its embodiments are also applicable to edge computing.
In the present specification and claims, a "circular buffer" is a data structure that uses a single, seemingly end-to-end "buffer" of fixed length, sometimes referred to as a ring buffer. The "buffer" may be a conventional circular buffer, which is usually a block of space allocated in local memory, or a "virtual circular buffer", which is not necessarily in memory but rather a file on a hard disk or even a plurality of distributed files on a plurality of distributed computing devices as long as the distributed files are logically connected to each other to form a "circular buffer".
Typically, input data is added to a buffer of size n. When the buffer is not yet full of data, there are at least two approaches. One approach is not to perform the autocorrelation calculation until the buffer is full, and then to compute the two or more components using the first n data elements according to the definitions of the components. Alternatively, the autocorrelation may be incrementally calculated from the beginning, when desired, by the method described in another patent application by the present inventor, until the buffer is full. Once the buffer is full and the two or more components of the autocorrelation of the first n data elements have been computed, the iterative algorithm provided herein can be used to iteratively compute the two or more components of the autocorrelation, and the autocorrelation can then be computed based on the iteratively computed components.
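The sliding-window bookkeeping described above can be sketched in a few lines. This is an illustrative sketch only (the function and variable names are not from the patent); Python's `deque` with a fixed `maxlen` behaves like the circular buffer: once full, appending a new element silently drops the oldest one.

```python
from collections import deque

def stream_windows(stream, n):
    """Yield each full computation window of size n as elements arrive."""
    window = deque(maxlen=n)       # fixed-length buffer: acts as a circular buffer
    for x in stream:
        window.append(x)           # when full, the oldest element is dropped
        if len(window) == n:       # only full windows take part in the calculation
            yield tuple(window)

wins = list(stream_windows([1, 2, 3, 4, 5], 3))
```

Until the first window fills, no windows are produced, matching the first approach described above (defer the calculation until the buffer is full of data).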
Several examples will be given in the following section.
FIG. 1 illustrates a high-level overview of an example computing system 100 that iteratively computes autocorrelation for streaming data. Referring to fig. 1, computing system 100 includes a number of devices connected by different networks, such as local area network 1021, wireless network 1022, and internet 1023, among others. The plurality of devices include, for example, a data analysis engine 1007, a storage system 1011, a real-time data stream 1006, and a plurality of distributed computing devices such as a personal computer 1016, a handheld device 1017, and a desktop 1018, among others, that may schedule data analysis tasks and/or query data analysis results.
Data analysis engine 1007 can include one or more processors, such as CPU 1009 and CPU 1010, one or more system memories, such as system memory 1008, and component calculation module 131 and autocorrelation calculation module 192. Module 131 is illustrated in greater detail in other figures (e.g., Figs. 1-1 and 1-2). Storage system 1011 may include one or more storage media, such as storage media 1012 and 1014, which may be used to store large data sets. For example, storage media 1012 and/or 1014 may contain data set 123. The data sets in storage system 1011 may be accessed by data analysis engine 1007.
In general, data stream 1006 may include streaming data from various data sources, such as stock prices, audio data, video data, geospatial data, internet data, mobile communications data, web-surfing data, banking data, sensor data, and/or closed caption data, among others. Several of these are depicted here by way of example: real-time data 1000 may include data collected in real time from sensors 1001, stocks 1002, correspondence 1003, banks 1004, and the like. Data analysis engine 1007 may receive data elements from data stream 1006. Data from different data sources may be stored in storage system 1011 and accessed for big data analysis; for example, data set 123 may come from different data sources and be accessed for big data analysis.
It should be understood that fig. 1 presents some concepts in a very simplified form, for example, distribution devices 1016 and 1017 may be coupled to data analysis engine 1007 through a firewall, data accessed or received by data analysis engine 1007 from data stream 1006 and/or storage system 1011 may be filtered through a data filter, and so on.
Fig. 1-1 illustrates an example computing system architecture 100A, for iterative computation of autocorrelation of streaming data, in which all (v = p > 1) components are directly iteratively computed. With respect to computing system architecture 100A, only the functions and interrelationships of its main components will be described here; the process of how these components cooperate to jointly perform the iterative autocorrelation calculation will be described later in conjunction with the flowchart depicted in Fig. 2. Fig. 1-1 illustrates details of 1006 and 1007 shown in Fig. 1. Referring to Fig. 1-1, computing system architecture 100A includes a component computation module 131 and an autocorrelation computation module 192. Component computation module 131 may be tightly coupled to one or more storage media via a high-speed data bus, or loosely coupled to one or more storage media managed by a storage system via a network, such as a local area network, a wide area network, or even the internet. Accordingly, component calculation module 131, and any other connected computing devices and their components, can send and receive message-related data (e.g., internet protocol ("IP") datagrams and other higher layer protocols that use IP datagrams, such as user datagram protocol ("UDP"), real-time streaming protocol ("RTSP"), real-time transport protocol ("RTP"), microsoft media server ("MMS"), transmission control protocol ("TCP"), hypertext transfer protocol ("HTTP"), simple mail transfer protocol ("SMTP"), etc.) over the network. The output of component calculation module 131 is taken as input to autocorrelation calculation module 192, and autocorrelation calculation module 192 may generate autocorrelation 193.
In general, the data stream 190 may be a sequence of digitally encoded signals (i.e., packets of data) used to transmit or receive information during transmission. The data stream 190 may contain data from different categories, such as stock prices, audio data, video data, geospatial data, internet data, mobile communications data, web-surfing data, banking data, sensor data, closed captioning data, real-time text, and the like. The data stream 190 may be a real-time stream or streamed stored data.
As stream data elements are received, the stream data elements may be placed in a circular buffer 121. For example, data element 101 is placed at position 121A, data element 102 is placed at position 121B, data element 103 is placed at position 121C, data element 104 is placed at position 121D, data element 105 is placed at position 121E, data element 106 is placed at position 121F, data element 107 is placed at position 121G, data element 108 is placed at position 121H, and data element 109 is placed at position 121I.
The data element 110 may then be received. Data element 110 may be placed at location 121A (overwriting data element 101).
As shown, the calculation window size is 8 (i.e., n = 8) and the circular buffer 121 has 9 locations, 121A-121I. The data elements in the computation window may change as new data elements are placed into the circular buffer 121. For example, when data element 109 is placed into location 121I, calculation window 122 may become calculation window 122A. When data element 110 is subsequently placed at location 121A, calculation window 122A becomes calculation window 122B.
Referring to computing system architecture 100A, component computation module 131 typically contains v (v = p > 1) component calculation modules for directly iteratively computing v components for a set of n data elements in a computation window. v is the number of components that are directly iteratively computed in a given algorithm for iteratively computing the autocorrelation at a given delay, and it varies with the iterative algorithm used. As shown in Fig. 1-1, component calculation module 131 contains a component Cd1 calculation module 161 and a component Cdv calculation module 162, with v - 2 other component calculation modules between them, which may be a component Cd2 calculation module, a component Cd3 calculation module, ..., and a component Cdv-1 calculation module. Each component calculation module calculates a particular component at a given delay. Each component calculation module comprises an initialization module for initializing the component for a first calculation window and an algorithm for directly iteratively calculating the component for an adjusted calculation window. For example, component Cd1 calculation module 161 includes initialization module 132 to initialize component Cd1 at a given delay and iterative algorithm 133 to iteratively calculate component Cd1 at the given delay; component Cdv calculation module 162 includes initialization module 138 to initialize component Cdv at a given delay and iterative algorithm 139 to iteratively calculate component Cdv at the given delay.
The initialization module 132 may initialize component Cd1 when module 161 is first used or when the autocorrelation calculation is reset. Likewise, the initialization module 138 may initialize component Cdv when module 162 is first used or when the autocorrelation calculation is reset.
Referring to Fig. 1-1, computing system architecture 100A also includes an autocorrelation calculation module 192. The autocorrelation calculation module 192 may calculate autocorrelation 193 at a given delay, as needed, based on one or more iteratively calculated components at that delay.
Figs. 1-2 illustrate an example computing system architecture 100B, for iterative computation of autocorrelation of a data stream, in which some (v, 1 ≤ v < p) components are directly iteratively computed and the remaining (w = p - v) components are indirectly iteratively computed. The difference between computing system architectures 100B and 100A is that architecture 100B includes a component computation module 135; otherwise, the same reference numerals are used in the same manner as in 100A. To avoid repeating what was explained in the description of 100A, only the differing parts are discussed here. The number v in 100B may differ from the number v in 100A, because some components that are directly iteratively computed in 100A are indirectly iteratively computed in 100B: in 100A, v = p > 1, but in 100B, 1 ≤ v < p. Referring to Figs. 1-2, computing system architecture 100B includes component calculation module 135. The output of component calculation module 131 may be input to component calculation module 135, the outputs of calculation modules 131 and 135 may be input to autocorrelation calculation module 192, and autocorrelation calculation module 192 may generate autocorrelation 193. Component calculation module 135 typically includes w = p - v component calculation modules for indirectly iteratively calculating the w components. For example, component calculation module 135 includes component calculation module 163 for indirectly iteratively calculating component Ci1 and component calculation module 164 for indirectly iteratively calculating component Ciw, with w - 2 other component calculation modules between them. Indirectly iteratively computing the w components includes indirectly iteratively computing each of the w components one by one. Indirectly iteratively computing a component includes accessing and using one or more components other than the component itself.
The one or more components may be initialized, directly iteratively calculated, or indirectly iteratively calculated.
FIG. 2 illustrates a flow diagram of an example method 200 of iteratively computing an autocorrelation for a streamed large data set or data stream. The method 200 will be described in conjunction with the components and data of the computing system architectures 100A and 100B, respectively.
The method 200 includes initializing, for a computation window of a specified size n (n > 1) of a streamed big data set or data stream, v (1 ≤ v ≤ p, p > 1) components of an autocorrelation at a given delay l (0 < l < n) (201). For example, for computing system architectures 100A and 100B, method 200 may access and initialize, according to the definitions of the components, v components of a computation window stored in circular buffer 121, using the stream data elements received and stored in the buffer in chronological order. For example, data element 101 is received and saved earlier than 102. Component calculation module 131 may access data elements 101, 102, 103, 104, 105, 106, 107, and 108 in computation window 122 on the buffer. Initialization module 132 may initialize component Cd1 141 at the given delay with data elements 101 to 108. As shown, component Cd1 141 contains contribution 151, contribution 152, and other contributions 153. Contribution 151 is the contribution of data element 101 to component Cd1 141 at the given delay. Contribution 152 is the contribution of data element 102 to component Cd1 141 at the given delay. Other contributions 153 are the contributions of data elements 103 to 108 to component Cd1 141 at the given delay. Likewise, initialization module 138 may initialize component Cdv 145 at the given delay with data elements 101 to 108. As shown, component Cdv 145 includes contribution 181, contribution 182, and other contributions 183. Contribution 181 is the contribution of data element 101 to component Cdv 145 at the given delay. Contribution 182 is the contribution of data element 102 to component Cdv 145 at the given delay. Other contributions 183 are the contributions of data elements 103 to 108 to component Cdv 145 at the given delay.
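As a concrete illustration of step 201, the sketch below initializes one plausible set of components for a first full computation window directly from their definitions. The particular components shown (sum, mean, centered sum of squares, and lag-l co-moment) and all names are assumptions for illustration; the actual set of v components depends on the iterative algorithm chosen.

```python
def init_components(window, l):
    """Initialize example autocorrelation components for the first window."""
    n = len(window)
    S = sum(window)                                   # running sum
    mean = S / n                                      # window mean
    SX = sum((x - mean) ** 2 for x in window)         # centered sum of squares
    covX = sum((window[i] - mean) * (window[i - l] - mean)
               for i in range(l, n))                  # lag-l co-moment
    return {"S": S, "mean": mean, "SX": SX, "covX": covX}

c = init_components([1, 2, 3, 4], 1)
```

Each term of each sum is one data element's "contribution" to that component, which is what the contribution-removal and contribution-addition steps later manipulate.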
The method 200 includes, when v < p (i.e., not all components are directly iteratively computed), indirectly iteratively computing, as needed, the w = p - v components at delay l using one or more components other than the component itself. The w components are computed (209) only when the autocorrelation is accessed. For example, referring to Figs. 1-2, where some components are directly iteratively computed and some indirectly, computation module 163 may indirectly iteratively compute component Ci1 based on one or more components other than Ci1, and computation module 164 may indirectly iteratively compute component Ciw based on one or more components other than Ciw. The one or more components may be initialized, directly iteratively calculated, or indirectly iteratively calculated.
Method 200 includes computing, as needed, the autocorrelation at delay l using one or more initialized or iteratively computed components at delay l (210). When the autocorrelation is accessed, it is computed based on one or more iteratively computed components; otherwise only the v components are iteratively computed.
The method 200 includes receiving a data element and saving the received data element to a buffer (202). For example, referring to 100A and 100B, data element 109 may be received after data elements 101-108 are received. The data element 109 may be stored in the circular buffer 121 at the 121I position.
The method 200 includes adjusting a computation window, including: the oldest received data elements are removed from the computation window and the newly received data elements are added to the computation window (203). For example, data element 101 is removed from the calculation window 122, data element 109 is added to the calculation window 122, and then the calculation window 122 transitions to the adjusted calculation window 122A.
The method 200 includes iteratively calculating v components of an autocorrelation with a delay of l directly for an adjusted calculation window (204), including: accessing (205) the l data elements in the computation window adjacent to the removed data element and the l data elements adjacent to the added data element; accessing v components of an autocorrelation with a delay of l (206); mathematically removing any contribution of the removed data element from each of the v components (207); and mathematically adding any contribution of the added data element to each of the v components (208). The details are described below.
Directly iteratively calculating the v components of the autocorrelation at the specified delay l for the adjusted computation window includes accessing the l data elements adjacent to the removed data element and the l data elements adjacent to the added data element in the computation window (205). For example, if the specified delay l is 1, iterative algorithm 133 may access data element 102, which is adjacent to the removed data element 101, and data element 108, which is adjacent to the added data element 109. If the specified delay l is 2, iterative algorithm 133 may access data elements 102 and 103, which are adjacent to the removed data element 101, and data elements 107 and 108, which are adjacent to the added data element 109, and so on. Similarly, if the specified delay l is 1, iterative algorithm 139 may access data element 102, which is adjacent to the removed data element 101, and data element 108, which is adjacent to the added data element 109. If the specified delay l is 2, iterative algorithm 139 may access data elements 102 and 103, which are adjacent to the removed data element 101, and data elements 107 and 108, which are adjacent to the added data element 109, and so on.
Directly iteratively calculating the v components of the autocorrelation at delay l for the adjusted computation window includes accessing the v (1 ≤ v ≤ p) components of the autocorrelation at delay l for the pre-adjustment computation window (206). For example, if the specified delay l is 1, iterative algorithm 133 may access component Cd1 141 at delay 1; if the specified delay l is 2, iterative algorithm 133 may access component Cd1 141 at delay 2, and so on. Similarly, if the specified delay l is 1, iterative algorithm 139 may access component Cdv 145 at delay 1; if the specified delay l is 2, iterative algorithm 139 may access component Cdv 145 at delay 2, and so on.
Directly iteratively calculating the v components of the autocorrelation at the specified delay l for the adjusted computation window includes mathematically removing any contribution of the removed data element from each of the v components (207). For example, if the specified delay l is 2, directly iteratively calculating component Cd1 143 at delay 2 may include contribution removal module 133A mathematically removing contribution 151 from component Cd1 141 at delay 2. Similarly, directly iteratively calculating component Cdv 147 at delay 2 may include contribution removal module 139A mathematically removing contribution 181 from component Cdv 145 at delay 2. Contributions 151 and 181 come from data element 101.
Directly iteratively calculating the v components of the autocorrelation at delay l for the adjusted computation window includes mathematically adding any contribution of the added data element to each of the v components (208). For example, if the specified delay l is 2, directly iteratively calculating component Cd1 143 at delay 2 may include contribution addition module 133B mathematically adding contribution 154 to component Cd1 141 at delay 2. Similarly, directly iteratively calculating component Cdv 147 at delay 2 may include contribution addition module 139B mathematically adding contribution 184 to component Cdv 145 at delay 2. Contributions 154 and 184 come from data element 109.
As shown in Figs. 1-1 and 1-2, component Cd1 143 includes contribution 152 (the contribution of data element 102), other contributions 153 (the contributions of data elements 103 to 108), and contribution 154 (the contribution of data element 109). Similarly, component Cdv 147 includes contribution 182 (the contribution of data element 102), other contributions 183 (the contributions of data elements 103 to 108), and contribution 184 (the contribution of data element 109).
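The remove-then-add pattern of steps 207 and 208 is exact for the simplest components. The sketch below (illustrative names, not from the patent) applies it to the running sum and mean: the removed element's contribution is subtracted and the added element's contribution is added, with no other data touched. Components involving lagged products would additionally need the l neighbouring elements, as step 205 describes.

```python
def update_sum_and_mean(S, n, x_removed, x_added):
    """Directly iterate the sum and mean for an adjusted window of size n."""
    S_new = S - x_removed + x_added   # remove old contribution, add new one
    return S_new, S_new / n

# Window [1, 2, 3, 4] becomes [2, 3, 4, 5]: only two elements are touched.
S_new, mean_new = update_sum_and_mean(10, 4, 1, 5)
```

The same two-step shape (subtract the removed element's term, add the new element's term) is what modules 133A/133B and 139A/139B perform on their respective components.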
When the autocorrelation is accessed and v < p (i.e., not all components are directly iteratively computed), method 200 includes indirectly iteratively computing, as needed, the w = p - v components at delay l using one or more components other than the component itself (209). The w components are computed only when the autocorrelation is accessed. For example, referring to Figs. 1-2, where some components are directly iteratively computed and some indirectly, computation module 163 may indirectly iteratively compute component Ci1 based on one or more components other than Ci1, and computation module 164 may indirectly iteratively compute component Ciw based on one or more components other than Ciw. The one or more components may be initialized, directly iteratively calculated, or indirectly iteratively calculated.
The method 200 includes calculating the autocorrelation on an as-needed basis. When the autocorrelation is accessed, it is computed based on one or more iteratively computed components; otherwise only the v components are directly iteratively computed. When the autocorrelation is accessed, method 200 includes indirectly iteratively computing, as needed, the w components at delay l (209). For example, in architecture 100A, autocorrelation calculation module 192 may compute autocorrelation 193 at a given delay. In architecture 100B, computation module 163 may indirectly iteratively compute component Ci1 based on one or more components other than Ci1, computation module 164 may indirectly iteratively compute component Ciw based on one or more components other than Ciw, and so on, and autocorrelation calculation module 192 may compute autocorrelation 193 at the given delay (210). Once the autocorrelation at a given delay is computed, method 200 includes receiving the next stream data element and starting the next iteration. Each time a new round of iterative computation begins, the adjusted computation window of the previous round becomes the pre-adjustment computation window of the new round.
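When the autocorrelation is finally accessed, it can be assembled from maintained components rather than from the raw data. The sketch below is an assumption for illustration, using the common ratio of the lag-l co-moment to the centered sum of squares; it shows the definition-based calculation and the component-based one agreeing on a small window.

```python
def autocorr_by_definition(window, l):
    """Autocorrelation at delay l computed directly from its definition."""
    n = len(window)
    mean = sum(window) / n
    covX = sum((window[i] - mean) * (window[i - l] - mean) for i in range(l, n))
    SX = sum((x - mean) ** 2 for x in window)
    return covX / SX

def autocorr_from_components(covX, SX):
    """Indirect computation: combine two maintained components on demand."""
    return covX / SX

rho = autocorr_by_definition([1, 2, 3, 4], 1)
```

Deferring `autocorr_from_components` until the result is accessed is what lets the method skip the w indirect components on rounds where no one reads the autocorrelation.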
As more data elements are received, 202 through 208 may be repeated, and 209 and 210 may be repeated as needed. For example, after data element 109 is received and components Cd1 143 through Cdv 147 are computed, data element 110 may be received (202). Once a new data element is received, method 200 includes adjusting the computation window by removing the oldest received data element from the computation window and adding the newest received data element to it (203). For example, data element 110 may be placed at position 121A, overwriting data element 101. Computation window 122A transitions to computation window 122B after data element 102 is removed and data element 110 is added.
The method 200 includes directly iteratively calculating, for the adjusted computation window and based on the v components of the pre-adjustment computation window, the v components of the autocorrelation at delay l (204), including: accessing the l data elements adjacent to the removed data element and the l data elements adjacent to the added data element in the computation window (205); accessing the v components (206); mathematically removing any contribution of the removed data element from each of the v components (207); and mathematically adding any contribution of the added data element to each of the v components (208). For example, referring to 100A and 100B, at a specified delay such as l = 1, iterative algorithm 133 may be used to directly iteratively compute component Cd1 144 at delay 1 for computation window 122B, based on component Cd1 143 at delay 1 computed for computation window 122A (204). Iterative algorithm 133 may access data element 103, which is adjacent to the removed data element 102, and data element 109, which is adjacent to the added data element 110 (205). Iterative algorithm 133 may access component Cd1 143 at delay 1 (206). Directly iteratively computing component Cd1 144 at delay 1 includes contribution removal module 133A mathematically removing contribution 152, i.e., the contribution of data element 102, from component Cd1 143 at delay 1 (207). Directly iteratively computing component Cd1 144 at delay 1 includes contribution addition module 133B mathematically adding contribution 155, i.e., the contribution of data element 110, to component Cd1 143 at delay 1 (208).
Similarly, at a specified delay such as l = 1, iterative algorithm 139 may be used to directly iteratively compute component Cdv 148 at delay 1 for computation window 122B, based on component Cdv 147 at delay 1 computed for computation window 122A. Iterative algorithm 139 may access data element 103, which is adjacent to the removed data element 102, and data element 109, which is adjacent to the added data element 110. Iterative algorithm 139 may access component Cdv 147 at delay 1. Directly iteratively computing component Cdv 148 at delay 1 includes contribution removal module 139A mathematically removing contribution 182, i.e., the contribution of data element 102, from component Cdv 147 at delay 1. Directly iteratively computing component Cdv 148 at delay 1 includes contribution addition module 139B mathematically adding contribution 185, i.e., the contribution of data element 110, to component Cdv 147 at delay 1.
As shown, component Cd1 144 at delay l includes other contributions 153 (the contributions of data elements 103 to 108), contribution 154 (the contribution of data element 109), and contribution 155 (the contribution of data element 110); component Cdv 148 at delay l includes other contributions 183 (the contributions of data elements 103 to 108), contribution 184 (the contribution of data element 109), and contribution 185 (the contribution of data element 110).
The method 200 includes indirectly iteratively calculating the w components and the autocorrelation at a given delay as needed, i.e., only when the autocorrelation is accessed. If the autocorrelation is not accessed, method 200 includes continuing to receive the next data element to be added for the next computation window (202). If the autocorrelation is accessed, method 200 includes indirectly iteratively computing the w components at the given delay (209) and computing the autocorrelation at the given delay based on one or more iteratively computed components at that delay (210).
When the next stream data element is accessed, component Cd1 144 can be used to directly iteratively calculate the next component Cd1, and component Cdv 148 can be used to directly iteratively calculate the next component Cdv.
Fig. 3-1 illustrates data elements removed from the computation window 300A and data elements added to the computation window 300A when iteratively computing an autocorrelation on stream data. The calculation window 300A is moved to the right. Referring to fig. 3-1, an existing data element is always removed from the left side of the computation window 300A, and a data element is always added to the right side of the computation window 300A.
Fig. 3-2 illustrates the data accessed in computation window 300A when iteratively computing an autocorrelation on stream data. For computation window 300A, the first n data elements are accessed to initialize two or more components at a given delay for the first computation window, and the w = p - v components and the autocorrelation are then indirectly iteratively computed as needed. Over time, the oldest data element, e.g., the (m+1)-th data element, is removed from computation window 300A, and a new data element, e.g., the (m+n+1)-th data element, is added to it. The one or more components at the given delay for the adjusted computation window are then directly iteratively computed based on the two or more components computed for the first computation window. If the specified delay is 1, a total of 4 data elements are accessed: the removed data element, the data element adjacent to it, the added data element, and the data element adjacent to it. If the specified delay is 2, a total of 6 data elements are accessed: the removed data element, the 2 data elements adjacent to it, the added data element, and the 2 data elements adjacent to it. If the specified delay is l, a total of 2 × (l + 1) data elements are accessed: the removed data element, the l data elements adjacent to it, the added data element, and the l data elements adjacent to it. The w = p - v components and the autocorrelation at the given delay are then indirectly iteratively computed as needed. Computation window 300A is then adjusted again by removing an old data element and adding a new one, and so on. For a given iterative algorithm, v is a constant and w = p - v is also a constant, so for a given delay the amount of data access and computation is reduced and constant.
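The access count claimed above can be stated as code. This sketch (illustrative names only) merely tallies the elements touched per iteration and contrasts that constant with the n elements a full recomputation would read.

```python
def accessed_per_iteration(l):
    """Elements touched by one direct iterative step at delay l."""
    return 2 * (l + 1)   # removed element + l neighbours, added element + l neighbours

# For n = 1_000_000 and l = 2, an iterative step reads 6 elements,
# while recomputing from scratch would read all 1_000_000.
saving = 1_000_000 - accessed_per_iteration(2)
```

The count is independent of n, which is why the savings grow with the window size.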
The larger the calculation window size n, the more significant the reduction in the amount of data access and calculation.
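The fixed access pattern described above can be sketched in a few lines. This is an illustrative sketch, not code from the patent; the function name and indexing convention are assumptions. It shows that a delay-l update touches only the removed element, its l in-window neighbors, the added element, and its l in-window neighbors: 2 × (l+1) accesses, independent of the window size n.

```python
# Sketch: elements touched when the window slides right by one position,
# from stream[m : m+n] to stream[m+1 : m+n+1], for a given delay l.
def accessed_elements(stream, m, n, l):
    """Return the data elements an iterative delay-l update must read."""
    removed = stream[m:m + l + 1]          # removed element + its l neighbors
    added = stream[m + n - l:m + n + 1]    # added element + its l neighbors
    return removed + added

stream = list(range(100))
touched = accessed_elements(stream, m=10, n=50, l=2)
assert len(touched) == 2 * (2 + 1)  # 6 accesses for l = 2, regardless of n = 50
```

The count stays 2 × (l+1) no matter how large n grows, which is the source of the savings claimed above.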
Fig. 3-3 illustrates the data elements removed from the computation window 300B and the data elements added to the computation window 300B when iteratively computing an autocorrelation on stream data. The computation window 300B moves to the left. Referring to Fig. 3-3, a newer data element is always removed from the right side of the computation window 300B and an older data element is always added to the left side of the computation window 300B.
Fig. 3-4 illustrates the data accessed from the computation window 300B when iteratively computing an autocorrelation on stream data. For computation window 300B, the first n data elements are accessed to initialize two or more components for a given delay for the first computation window, and then the w = p - v components and the autocorrelation are indirectly iteratively computed as needed. Over time, a data element, e.g., the (m+n)-th data element, is removed from the computation window 300B and a data element, e.g., the m-th data element, is added to the computation window 300B. The number of data elements that need to be accessed is the same as described in Fig. 3-2; only the direction of the computation window's movement differs.
Fig. 4-1 illustrates the definition of autocorrelation. Let X = (x_(m+1), x_(m+2), …, x_(m+n)) be a calculation window of size n of a data stream containing the data involved in the autocorrelation calculation. The calculation window may move in either the right or the left direction. For example, when the autocorrelation of the newest data is to be calculated, the calculation window moves to the right: a data element is removed from the left side of the calculation window and a data element is added to the right side. When the autocorrelation of older data is to be reviewed, the calculation window moves to the left: a data element is removed from the right side and a data element is added to the left side. The equations for iteratively calculating the components differ between the two cases. To distinguish them, the adjusted calculation window in the former case is denoted X^I and in the latter case X^II. Equations 401 and 402 are the conventional equations for the sum S_k of all data elements and the average x̄_k, respectively, of the k-th calculation window X of size n. Equation 403 is the conventional equation for the autocorrelation ρ_(k,l) of the k-th calculation window X for a given delay l. Equations 404 and 405 are the conventional equations for the sum S^I_(k+1) of all data elements and the average x̄^I_(k+1), respectively, of the (k+1)-th adjusted calculation window X^I of size n. Equation 406 is the conventional equation for the autocorrelation ρ^I_(k+1,l) of the (k+1)-th adjusted calculation window X^I for a given delay l. As mentioned above, when the calculation window moves to the left, the adjusted calculation window is denoted X^II. Equations 407 and 408 are the conventional equations for the sum S^II_(k+1) of all data elements and the average x̄^II_(k+1), respectively, of the (k+1)-th adjusted calculation window X^II of size n. Equation 409 is the conventional equation for the autocorrelation ρ^II_(k+1,l) of the (k+1)-th adjusted calculation window X^II for a given delay l.
To illustrate how the autocorrelation is iteratively calculated using components, three different iterative autocorrelation algorithms are provided as examples. A new round of computation begins each time there is a data change in the computation window (e.g., 122 → 122A → 122B). The sum and/or the average are basic components used for calculating the autocorrelation. The equations for iteratively calculating the sum or the average are the iterative component equations used by all the example iterative autocorrelation calculation algorithms.
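The iterative sum/average update mentioned above can be sketched as follows. This is a minimal sketch under assumed notation (the patent's exact equations 414/415 are rendered as images); for a window sliding right by one, the sum update is S_(k+1) = S_k + x_added - x_removed, and the average is the sum divided by n.

```python
# Sketch of the basic iterative component update for a right-moving window.
def update_sum(prev_sum, x_removed, x_added):
    """Slide the window by one element without rescanning it."""
    return prev_sum + x_added - x_removed

window = [8, 3, 6, 1]          # calculation window 503 from the examples below
s1 = sum(window)               # initialize the sum for the first window: 18
s2 = update_sum(s1, 8, 9)      # slide to window 504: 3, 6, 1, 9
assert s2 == sum([3, 6, 1, 9])
assert s2 / 4 == 4.75          # average of window 504
```

One subtraction and one addition replace an n-term sum, which is why the sum and average are cheap components to maintain iteratively.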
Fig. 4-2 illustrates a first example iterative autocorrelation calculation algorithm (iterative algorithm 1). Equations 401 and 402 may be used to initialize the components S_k and/or x̄_k, respectively. Equations 410, 411, and 412 may be used to initialize the components SS_k, SX_k, and covX_(k,l), respectively. Equation 413 may be used to calculate the autocorrelation ρ_(k,l). When the calculation window moves to the right, iterative algorithm 1 comprises the iterative calculation of the components S^I_(k+1) or x̄^I_(k+1), SS^I_(k+1), SX^I_(k+1), and covX^I_(k+1,l); once SX^I_(k+1) and covX^I_(k+1,l) are calculated, the autocorrelation ρ^I_(k+1,l) may be calculated from them. Once S_k and/or x̄_k are available, equations 414 and 415 may be used to iteratively calculate the components S^I_(k+1) and x̄^I_(k+1), respectively, of the adjusted calculation window X^I. Once SS_k is available, equation 416 may be used to directly iteratively calculate the component SS^I_(k+1) of the adjusted calculation window X^I. Once S^I_(k+1) or x̄^I_(k+1) and SS^I_(k+1) are available, equation 417 may be used to indirectly iteratively calculate the component SX^I_(k+1) of the adjusted calculation window X^I. Once covX_(k,l), SS^I_(k+1), S_k or x̄_k, and S^I_(k+1) or x̄^I_(k+1) are available, equation 418 may be used to directly iteratively calculate the component covX^I_(k+1,l) of the adjusted calculation window X^I. Equations 414, 415, 417, and 418 each contain multiple equations, but only one of each is needed, depending on whether the sum, the average, or both are available. Once covX^I_(k+1,l) and SX^I_(k+1) are calculated, equation 419 may be used to indirectly iteratively calculate the autocorrelation ρ^I_(k+1,l) of the adjusted calculation window X^I for a given delay l. When the calculation window moves to the left, iterative algorithm 1 comprises the iterative calculation of the components S^II_(k+1) or x̄^II_(k+1), SS^II_(k+1), SX^II_(k+1), and covX^II_(k+1,l); once SX^II_(k+1) and covX^II_(k+1,l) are calculated, the autocorrelation ρ^II_(k+1,l) may be calculated from them. Once S_k and/or x̄_k are available, equations 420 and 421 may be used to iteratively calculate the components S^II_(k+1) and x̄^II_(k+1), respectively, of the adjusted calculation window X^II. Once SS_k is available, equation 422 may be used to directly iteratively calculate the component SS^II_(k+1) of the adjusted calculation window X^II. Once S^II_(k+1) or x̄^II_(k+1) and SS^II_(k+1) are available, equation 423 may be used to indirectly iteratively calculate the component SX^II_(k+1) of the adjusted calculation window X^II. Once covX_(k,l), SS^II_(k+1), S_k or x̄_k, and S^II_(k+1) or x̄^II_(k+1) are available, equation 424 may be used to directly iteratively calculate the component covX^II_(k+1,l) of the adjusted calculation window X^II. Equations 420, 421, 423, and 424 each contain multiple equations, but only one of each is needed, depending on whether the sum, the average, or both are available. Once covX^II_(k+1,l) and SX^II_(k+1) are calculated, equation 425 may be used to indirectly iteratively calculate the autocorrelation ρ^II_(k+1,l) of the adjusted calculation window X^II for a given delay l.
Fig. 4-3 illustrates a second example iterative autocorrelation calculation algorithm (iterative algorithm 2). Equations 401 and 402 may be used to initialize the components S_k and/or x̄_k, respectively. Equations 426 and 427 may be used to initialize the components SX_k and covX_(k,l), respectively. Equation 428 may be used to calculate the autocorrelation ρ_(k,l). When the calculation window moves to the right, iterative algorithm 2 comprises the iterative calculation of the components S^I_(k+1) or x̄^I_(k+1), SX^I_(k+1), and covX^I_(k+1,l); once SX^I_(k+1) and covX^I_(k+1,l) are calculated, the autocorrelation ρ^I_(k+1,l) may be calculated from them. Once S_k and/or x̄_k are available, equations 429 and 430 may be used to iteratively calculate the components S^I_(k+1) and x̄^I_(k+1), respectively, of the adjusted calculation window X^I. Once SX_k and S^I_(k+1) and/or x̄^I_(k+1) are available, equation 431 may be used to directly iteratively calculate the component SX^I_(k+1) of the adjusted calculation window X^I. Once covX_(k,l), S_k or x̄_k, and S^I_(k+1) or x̄^I_(k+1) are available, equation 432 may be used to directly iteratively calculate the component covX^I_(k+1,l) of the adjusted calculation window X^I. Equations 429, 430, 431, and 432 each contain multiple equations, but only one of each is needed, depending on whether the sum, the average, or both are available. Once covX^I_(k+1,l) and SX^I_(k+1) are calculated, equation 433 may be used to indirectly iteratively calculate the autocorrelation ρ^I_(k+1,l) of the adjusted calculation window X^I for a given delay l. When the calculation window moves to the left, iterative algorithm 2 comprises the iterative calculation of the components S^II_(k+1) or x̄^II_(k+1), SX^II_(k+1), and covX^II_(k+1,l); once SX^II_(k+1) and covX^II_(k+1,l) are calculated, the autocorrelation ρ^II_(k+1,l) may be calculated from them. Once S_k and/or x̄_k are available, equations 434 and 435 may be used to iteratively calculate the components S^II_(k+1) and x̄^II_(k+1), respectively, of the adjusted calculation window X^II. Once SX_k and S^II_(k+1) and/or x̄^II_(k+1) are available, equation 436 may be used to directly iteratively calculate the component SX^II_(k+1) of the adjusted calculation window X^II. Once covX_(k,l), S_k or x̄_k, and S^II_(k+1) or x̄^II_(k+1) are available, equation 437 may be used to directly iteratively calculate the component covX^II_(k+1,l) of the adjusted calculation window X^II. Equations 434, 435, 436, and 437 each contain multiple equations, but only one of each is needed, depending on whether the sum, the average, or both are available. Once covX^II_(k+1,l) and SX^II_(k+1) are calculated, equation 438 may be used to indirectly iteratively calculate the autocorrelation ρ^II_(k+1,l) of the adjusted calculation window X^II for a given delay l.
Fig. 4-4 illustrates a third example iterative autocorrelation calculation algorithm (iterative algorithm 3). Equations 401 and 402 may be used to initialize the components S_k and/or x̄_k, respectively. Equations 439 and 440 may be used to initialize the components SX_k and covX_(k,l), respectively. Equation 441 may be used to calculate the autocorrelation ρ_(k,l). When the calculation window moves to the right, iterative algorithm 3 comprises the iterative calculation of the components S^I_(k+1) or x̄^I_(k+1), SX^I_(k+1), and covX^I_(k+1,l); once SX^I_(k+1) and covX^I_(k+1,l) are calculated, the autocorrelation ρ^I_(k+1,l) may be calculated from them. Once S_k and/or x̄_k are available, equations 442 and 443 may be used to iteratively calculate the components S^I_(k+1) and x̄^I_(k+1), respectively, of the adjusted calculation window X^I. Once SX_k, S_k and/or x̄_k, and S^I_(k+1) and/or x̄^I_(k+1) are available, equation 444 may be used to directly iteratively calculate the component SX^I_(k+1) of the adjusted calculation window X^I. Once covX_(k,l), S_k or x̄_k, and S^I_(k+1) or x̄^I_(k+1) are available, equation 445 may be used to directly iteratively calculate the component covX^I_(k+1,l) of the adjusted calculation window X^I. Equations 442, 443, 444, and 445 each contain multiple equations, but only one of each is needed, depending on whether the sum, the average, or both are available. Once covX^I_(k+1,l) and SX^I_(k+1) are calculated, equation 446 may be used to indirectly iteratively calculate the autocorrelation ρ^I_(k+1,l) of the adjusted calculation window X^I for a given delay l. When the calculation window moves to the left, iterative algorithm 3 comprises the iterative calculation of the components S^II_(k+1) or x̄^II_(k+1), SX^II_(k+1), and covX^II_(k+1,l); once SX^II_(k+1) and covX^II_(k+1,l) are calculated, the autocorrelation ρ^II_(k+1,l) may be calculated from them. Once S_k and/or x̄_k are available, equations 447 and 448 may be used to iteratively calculate the components S^II_(k+1) and x̄^II_(k+1), respectively, of the adjusted calculation window X^II. Once SX_k, S_k and/or x̄_k, and S^II_(k+1) and/or x̄^II_(k+1) are available, equation 449 may be used to directly iteratively calculate the component SX^II_(k+1) of the adjusted calculation window X^II. Once covX_(k,l), S_k or x̄_k, and S^II_(k+1) or x̄^II_(k+1) are available, equation 450 may be used to directly iteratively calculate the component covX^II_(k+1,l) of the adjusted calculation window X^II. Equations 447, 448, 449, and 450 each contain multiple equations, but only one of each is needed, depending on whether the sum, the average, or both are available. Once covX^II_(k+1,l) and SX^II_(k+1) are calculated, equation 451 may be used to indirectly iteratively calculate the autocorrelation ρ^II_(k+1,l) of the adjusted calculation window X^II for a given delay l.
To illustrate the iterative autocorrelation algorithms and compare them with the conventional algorithm, three examples are given below. Data from three moving calculation windows are used. For the conventional algorithm, the calculation process is identical for all three calculation windows. For the iterative algorithms, the first calculation window performs the initialization of two or more components, and the second and third calculation windows perform iterative calculations.
FIGS. 5-1, 5-2, and 5-3 show a first calculation window, a second calculation window, and a third calculation window, respectively, for a calculation example. The computation window 503 comprises the first 4 data elements of the data stream 501: 8,3,6,1. The computation window 504 includes 4 data elements of the data stream 501: 3,6,1,9. The computation window 505 comprises 4 data elements of the data stream 501: 6,1,9,2. The example of the calculation assumes that the calculation window moves from left to right. The data stream 501 may be streamed large data or stream data. The calculation window size 502(n) is 4.
The autocorrelation with a delay of 1 is first calculated for computation windows 503, 504, and 505, respectively, using conventional algorithms.
An autocorrelation with a delay of 1 is calculated for the calculation window 503:
x̄_1 = (8 + 3 + 6 + 1)/4 = 18/4 = 4.5
Σ(x_i - x̄_1)² = 3.5² + (-1.5)² + 1.5² + (-3.5)² = 29
Σ(x_i - x̄_1)(x_(i-1) - x̄_1) = (3 - 4.5)(8 - 4.5) + (6 - 4.5)(3 - 4.5) + (1 - 4.5)(6 - 4.5) = -12.75
ρ_(1,1) = -12.75/29 ≈ -0.4397
without any optimization, the autocorrelation with a delay of 1 is calculated for a calculation window of size 4 for a total of 2 divisions, 7 multiplications, 8 additions, and 10 subtractions.
The same equations and processes can be used to calculate the autocorrelation with a delay of 1 for the calculation window 504 shown in Fig. 5-2 and for the calculation window 505 shown in Fig. 5-3, respectively. For calculation window 504, the autocorrelation with a delay of 1 is ρ_(2,1) = -22.8125/36.75 ≈ -0.6207. For calculation window 505, the autocorrelation with a delay of 1 is ρ_(3,1) = -32.25/41 ≈ -0.7866.
Each of these two calculations includes 2 divisions, 7 multiplications, 8 additions, and 10 subtractions without optimization. Without optimization, the conventional algorithm typically requires 2 divisions, 2n - l multiplications, 3n - (l+3) additions, and 3n - 2l subtractions to calculate an autocorrelation for a given delay l with a computation window of size n.
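The conventional calculation above can be sketched directly from the definition in Fig. 4-1. This is an illustrative sketch (the function name is an assumption, not from the patent); it also checks the operation-count formulas just quoted for n = 4 and l = 1.

```python
# Conventional (non-iterative) delay-l autocorrelation per the definition:
# rho = sum_{i>l}(x_i - mean)(x_{i-l} - mean) / sum_i (x_i - mean)^2
def autocorrelation(window, l):
    n = len(window)
    mean = sum(window) / n
    sx = sum((x - mean) ** 2 for x in window)
    cov = sum((window[i] - mean) * (window[i - l] - mean) for i in range(l, n))
    return cov / sx

# Calculation window 503 from the example: rho(1,1) = -12.75/29
assert abs(autocorrelation([8, 3, 6, 1], 1) - (-12.75 / 29)) < 1e-12
# Operation-count formulas quoted above, for n = 4 and l = 1:
n, l = 4, 1
assert (2 * n - l, 3 * n - (l + 3), 3 * n - 2 * l) == (7, 8, 10)
```

Every window requires a full rescan of all n elements, which is exactly what the iterative algorithms below avoid.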
The autocorrelation with a delay of 1 is calculated for computation windows 503, 504, and 505, respectively, using iterative algorithm 1.
An autocorrelation with a delay of 1 is calculated for the calculation window 503:
1. Initialize the components x̄_1, SS_1, SX_1, and covX_(1,1) of round 1 using equations 402, 410, 411, and 412, respectively:
x̄_1 = (8 + 3 + 6 + 1)/4 = 4.5
SS_1 = 8² + 3² + 6² + 1² = 110
SX_1 = SS_1 - 4x̄_1² = 110 - 4 × 4.5² = 29
covX_(1,1) = (3 - 4.5)(8 - 4.5) + (6 - 4.5)(3 - 4.5) + (1 - 4.5)(6 - 4.5) = -12.75
2. Calculate the autocorrelation ρ_(1,1) of round 1 using equation 413:
ρ_(1,1) = covX_(1,1)/SX_1 = -12.75/29 ≈ -0.4397
There are 2 divisions, 9 multiplications, 8 additions, and 7 subtractions in calculating the autocorrelation with a delay of 1 for the calculation window 503.
An autocorrelation with a delay of 1 is calculated for the calculation window 504:
1. Iteratively calculate the components x̄_2, SS_2, SX_2, and covX_(2,1) of round 2 using equations 415, 416, 417, and 418, respectively:
x̄_2 = x̄_1 + (x_(m+5) - x_(m+1))/4 = 4.5 + (9 - 8)/4 = 4.75
SS_2 = SS_1 + x_(m+5)² - x_(m+1)² = 110 + 9² - 8² = 110 + 81 - 64 = 127
SX_2 = SS_2 - 4x̄_2² = 127 - 4 × 4.75² = 127 - 90.25 = 36.75
covX_(2,1) = -22.8125 (directly iteratively updated from covX_(1,1), SS_2, x̄_1, and x̄_2 by equation 418)
2. Calculate the autocorrelation ρ_(2,1) of round 2 using equation 419:
ρ_(2,1) = covX_(2,1)/SX_2 = -22.8125/36.75 ≈ -0.6207
The computation window 504 iteratively computes the autocorrelation with a delay of 1 for a total of 2 divisions, 10 multiplications, 8 additions, and 7 subtractions.
An autocorrelation with a delay of 1 is calculated for the calculation window 505:
1. Iteratively calculate the components x̄_3, SS_3, SX_3, and covX_(3,1) of round 3 using equations 415, 416, 417, and 418, respectively:
x̄_3 = x̄_2 + (x_(m+6) - x_(m+2))/4 = 4.75 + (2 - 3)/4 = 4.5
SS_3 = SS_2 + x_(m+6)² - x_(m+2)² = 127 + 2² - 3² = 127 + 4 - 9 = 122
SX_3 = SS_3 - 4x̄_3² = 122 - 4 × 4.5² = 122 - 81 = 41
covX_(3,1) = -32.25 (directly iteratively updated from covX_(2,1), SS_3, x̄_2, and x̄_3 by equation 418)
2. Calculate the autocorrelation ρ_(3,1) of round 3 using equation 419:
ρ_(3,1) = covX_(3,1)/SX_3 = -32.25/41 ≈ -0.7866
There are 2 divisions, 10 multiplications, 8 additions, and 7 subtractions in calculating the autocorrelation with a delay of 1 for the calculation window 505.
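The round 2 and round 3 values above can be cross-checked in code. This sketch (names assumed, not from the patent) maintains SS iteratively across the stream 8, 3, 6, 1, 9, 2 and confirms that the directly computed autocorrelation matches the iteratively obtained ρ_(3,1).

```python
# Cross-check: the iteratively maintained SS reproduces the values 110 -> 127
# -> 122 above, and the direct definition reproduces rho(3,1) = -32.25/41.
def direct_rho(w, l=1):
    m = sum(w) / len(w)
    cov = sum((w[i] - m) * (w[i - l] - m) for i in range(l, len(w)))
    sx = sum((x - m) ** 2 for x in w)
    return cov / sx

stream, n = [8, 3, 6, 1, 9, 2], 4
ss = sum(x * x for x in stream[:n])         # round 1: 110
for k in range(2):                          # rounds 2 and 3
    ss += stream[n + k] ** 2 - stream[k] ** 2
assert ss == 122                            # matches SS_3 above
assert abs(direct_rho([6, 1, 9, 2]) - (-32.25 / 41)) < 1e-12
```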
The autocorrelation with a delay of 1 is calculated for computation windows 503, 504, and 505, respectively, using iterative algorithm 2 below.
An autocorrelation with a delay of 1 is calculated for the calculation window 503:
1. Initialize the components x̄_1, SX_1, and covX_(1,1) of round 1 using equations 402, 426, and 427, respectively:
x̄_1 = (8 + 3 + 6 + 1)/4 = 4.5
SX_1 = (8 - 4.5)² + (3 - 4.5)² + (6 - 4.5)² + (1 - 4.5)² = 29
covX_(1,1) = (3 - 4.5)(8 - 4.5) + (6 - 4.5)(3 - 4.5) + (1 - 4.5)(6 - 4.5) = -12.75
2. Calculate the autocorrelation ρ_(1,1) of round 1 using equation 428:
ρ_(1,1) = covX_(1,1)/SX_1 = -12.75/29 ≈ -0.4397
There are 2 divisions, 7 multiplications, 8 additions, and 10 subtractions in calculating the autocorrelation with a delay of 1 for the calculation window 503.
An autocorrelation with a delay of 1 is calculated for the calculation window 504:
1. Iteratively calculate the components x̄_2, SX_2, and covX_(2,1) of round 2 using equations 430, 431, and 432, respectively:
x̄_2 = x̄_1 + (9 - 8)/4 = 4.75
SX_2 = 36.75 (directly iteratively updated from SX_1, x̄_1, and x̄_2 by equation 431)
covX_(2,1) = -22.8125 (directly iteratively updated from covX_(1,1) by equation 432)
2. Calculate the autocorrelation ρ_(2,1) of round 2 using equation 433:
ρ_(2,1) = covX_(2,1)/SX_2 = -22.8125/36.75 ≈ -0.6207
The computation window 504 iteratively computes the autocorrelation with a delay of 1 for a total of 2 divisions, 7 multiplications, 10 additions, and 7 subtractions.
An autocorrelation with a delay of 1 is calculated for the calculation window 505:
1. Iteratively calculate the components x̄_3, SX_3, and covX_(3,1) of round 3 using equations 430, 431, and 432, respectively:
x̄_3 = x̄_2 + (2 - 3)/4 = 4.5
SX_3 = 41 (directly iteratively updated from SX_2, x̄_2, and x̄_3 by equation 431)
covX_(3,1) = -32.25 (directly iteratively updated from covX_(2,1) by equation 432)
2. Calculate the autocorrelation ρ_(3,1) of round 3 using equation 433:
ρ_(3,1) = covX_(3,1)/SX_3 = -32.25/41 ≈ -0.7866
The computation window 505 iteratively computes the autocorrelation with a delay of 1 with a total of 2 divisions, 7 multiplications, 10 additions, and 7 subtractions.
The autocorrelation with a delay of 1 is then calculated for each of the calculation windows 503, 504, and 505 using iterative algorithm 3.
An autocorrelation with a delay of 1 is calculated for the calculation window 503:
1. Initialize the components x̄_1, SX_1, and covX_(1,1) of round 1 using equations 402, 439, and 440, respectively:
x̄_1 = (8 + 3 + 6 + 1)/4 = 4.5
SX_1 = (8 - 4.5)² + (3 - 4.5)² + (6 - 4.5)² + (1 - 4.5)² = 29
covX_(1,1) = (3 - 4.5)(8 - 4.5) + (6 - 4.5)(3 - 4.5) + (1 - 4.5)(6 - 4.5) = -12.75
2. Calculate the autocorrelation ρ_(1,1) of round 1 using equation 441:
ρ_(1,1) = covX_(1,1)/SX_1 = -12.75/29 ≈ -0.4397
There are 2 divisions, 7 multiplications, 8 additions, and 10 subtractions in calculating the autocorrelation with a delay of 1 for the calculation window 503.
An autocorrelation with a delay of 1 is calculated for the calculation window 504:
1. Iteratively calculate the components x̄_2, SX_2, and covX_(2,1) of round 2 using equations 443, 444, and 445, respectively:
x̄_2 = x̄_1 + (9 - 8)/4 = 4.75
SX_2 = 36.75 (directly iteratively updated from SX_1, x̄_1, and x̄_2 by equation 444)
covX_(2,1) = -22.8125 (directly iteratively updated from covX_(1,1) by equation 445)
2. Calculate the autocorrelation ρ_(2,1) of round 2 using equation 446:
ρ_(2,1) = covX_(2,1)/SX_2 = -22.8125/36.75 ≈ -0.6207
The computation window 504 iteratively computes the autocorrelation with a delay of 1 with a total of 2 divisions, 7 multiplications, 9 additions, and 8 subtractions.
An autocorrelation with a delay of 1 is calculated for the calculation window 505:
1. Iteratively calculate the components x̄_3, SX_3, and covX_(3,1) of round 3 using equations 443, 444, and 445, respectively:
x̄_3 = x̄_2 + (2 - 3)/4 = 4.5
SX_3 = 41 (directly iteratively updated from SX_2, x̄_2, and x̄_3 by equation 444)
covX_(3,1) = -32.25 (directly iteratively updated from covX_(2,1) by equation 445)
2. Calculate the autocorrelation ρ_(3,1) of round 3 using equation 446:
ρ_(3,1) = covX_(3,1)/SX_3 = -32.25/41 ≈ -0.7866
The computation window 505 iteratively computes the autocorrelation with a delay of 1 with a total of 2 divisions, 7 multiplications, 9 additions, and 8 subtractions.
In the above three examples, the average is used in the iterative autocorrelation calculations. The sum may be used instead, with only the operands differing. In addition, the calculation windows in the above three examples move from left to right. The calculation process is similar when the calculation window moves from right to left, except that a different set of equations applies.
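The right-to-left case just mentioned can be sketched for the basic sum component. This is an illustrative sketch (names assumed): when the window slides left, the newest element leaves on the right and an older element re-enters on the left, so the update simply swaps the roles of the removed and added operands.

```python
# Sketch of the left-moving sum update: window slides from
# stream[m+1 : m+n+1] back to stream[m : m+n].
def update_sum_left(prev_sum, x_removed_right, x_added_left):
    return prev_sum + x_added_left - x_removed_right

stream = [8, 3, 6, 1, 9]
s_right = sum(stream[1:5])                 # window 3, 6, 1, 9 -> 19
s_left = update_sum_left(s_right, 9, 8)    # back to window 8, 3, 6, 1
assert s_left == sum(stream[0:4])          # 18
```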
Fig. 6-1 compares the computation load of the conventional autocorrelation algorithms and the iterative autocorrelation algorithms for n = 4 and a delay of 1. As shown, the numbers of division, multiplication, addition, and subtraction operations of any of the iterative algorithms and the conventional algorithms are comparable.
Fig. 6-2 compares the computation load of the conventional autocorrelation algorithms and the iterative autocorrelation algorithms for n = 1,000,000 and a delay of 1. As shown, any one of the iterative algorithms uses far fewer multiplication, addition, and subtraction operations than the conventional algorithms. An iterative autocorrelation algorithm allows a single machine to process data that would otherwise need to be processed on thousands of computers, greatly improving computational efficiency, reducing the computational resources required, and lowering the energy consumption of the computing devices, so that efficient, low-cost real-time determination of the given delay repeatability of streaming data becomes possible, including some real-time scenarios that would otherwise not be feasible at all.
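The Fig. 6-2 comparison can be reproduced from the operation-count formulas quoted earlier. This is a sketch under stated assumptions: the conventional counts come from the formulas 2n - l, 3n - (l+3), and 3n - 2l given above, and the constant iterative counts are the per-round iterative-algorithm-3 figures from the examples, assumed to carry over to any n.

```python
# Per-window operation counts: conventional grows with n, iterative is constant.
def conventional_ops(n, l):
    return {"div": 2, "mul": 2 * n - l, "add": 3 * n - (l + 3), "sub": 3 * n - 2 * l}

big = conventional_ops(1_000_000, 1)
assert big["mul"] == 1_999_999 and big["add"] == 2_999_996
iterative = {"div": 2, "mul": 7, "add": 9, "sub": 8}   # independent of n
assert big["mul"] / iterative["mul"] > 100_000          # several-hundred-thousand-fold fewer multiplications
```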
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described implementations are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (10)

1. A method implemented by a computing system constructed from one or more computing devices, characterized by:
initializing, by the computing system based on a computing device, for a pre-adjustment computation window of specified size n (n >1) of a data stream and a delay l (0< l < n), two or more components of an autocorrelation of delay l, data elements of the pre-adjustment computation window being stored in a buffer of the computing system;
receiving, by the computing system based on a computing device, a data element;
saving, by the computing system based on the computing device, the received data elements into the buffer;
adjusting, by the computing system based on a computing device, the pre-adjustment computing window by:
removing the oldest accessed or received data element from the pre-adjustment computing window; and
adding the accessed or received data elements to the pre-adjustment computing window;
iteratively calculating, by the computing system based on a computing device, two or more components of the autocorrelation with a delay of l for the adjusted computing window based at least on the two or more components of the autocorrelation with a delay of l for the pre-adjustment computing window, while avoiding accessing and using all data elements in the adjusted computing window, so as to reduce data access latency, improve computational efficiency, save computational resources, and reduce the energy consumption of the computing system in iteratively calculating the two or more components; and
generating, by the computing system based on the computing device, an autocorrelation with a delay of l for the adjusted computation window based on one or more components iteratively computed for the adjusted computation window.
2. The computing system implemented method of claim 1, wherein: the method further includes, for each of a plurality of data elements to be added, storing the received data element in the buffer, adjusting the pre-adjustment computation window, iteratively calculating the two or more components, and generating an autocorrelation with a delay of l for the adjusted computation window.
3. The computing system implemented method of claim 2, wherein: the generating of the autocorrelation with a delay of l for the adjusted calculation window is performed if and only if the autocorrelation is accessed.
4. The computing system implemented method of claim 3, wherein: generating the autocorrelation with the delay of l for the adjusted computation window further includes indirectly iteratively computing, by the computing-device based computing system, one or more components of the autocorrelation with the delay of l for the adjusted computation window, the indirect iterative computation including computing each of the one or more components individually based on one or more components other than the component to be computed.
5. A computing system, characterized by:
one or more processors;
one or more storage media; and
one or more computing modules that, when executed by at least one of the one or more processors, perform a method comprising:
a. initializing, for a pre-adjustment computation window of specified size n (n >1) of a buffer of a data stream stored on one or more storage devices of the computing system and a delay l (0< l < n), two or more components of an autocorrelation of delay l;
b. receiving a data element to be added to the pre-adjustment computing window;
c. saving the data element to the buffer;
d. adjusting the pre-adjustment computation window, comprising:
removing the oldest received data element from the pre-adjustment computation window; and
adding data elements to be added into the calculation window before adjustment;
e. iteratively calculating two or more components of the autocorrelation with delay l for the adjusted computation window based at least on the two or more components of the autocorrelation with delay l for the pre-adjustment computation window, while avoiding accessing and using all data elements in the adjusted computation window so as to reduce data access latency in iteratively calculating the two or more components, thereby improving computation efficiency, saving computation resources, and reducing the energy consumption of the computing system; and
f. an autocorrelation is generated for the adjusted computation window with a delay of l based on one or more components that iterate a computation for the adjusted computation window.
6. The computing system of claim 5, wherein: the one or more computing modules, when executed by at least one of the one or more processors, perform b, c, d, e, and f a plurality of times.
7. The computing system of claim 6, wherein: f is performed if and only if the autocorrelation is accessed for which the delay of the adjusted computation window is l.
8. The computing system of claim 7, wherein: the method further includes indirectly iteratively calculating, by the computing system, one or more components of the autocorrelation with delay l for the adjusted computation window, the indirect iterative calculation including calculating each of the one or more components individually based on one or more components other than the component to be calculated.
9. A computing system program product for execution on a computing system comprising one or more computing devices, the computing system including one or more processors and one or more storage media, the computing system program product comprising computing device-executable instructions that, when executed by the computing system, perform a method comprising:
initializing, for a pre-adjustment computation window of specified size n (n >1) of a buffer of a data stream stored on at least one storage medium of the computing system and a delay l (0< l < n), two or more components of an autocorrelation of delay l;
receiving a data element to be added to the pre-adjustment computing window;
saving the received data elements in a buffer;
adjusting the pre-adjustment calculation window by:
removing the oldest received data element from the pre-adjustment computation window; and
adding data elements to be added to the pre-adjustment computing window;
iteratively calculating two or more components of the autocorrelation with delay l for the adjusted computation window based at least on the two or more components of the autocorrelation with delay l for the pre-adjustment computation window, while avoiding accessing and using all data elements in the adjusted computation window so as to reduce data access latency in iteratively calculating the two or more components, thereby improving computation efficiency, saving computation resources, and reducing the energy consumption of the computing system; and
an autocorrelation is generated for the adjusted computation window with a delay of l based on one or more components that iterate a computation for the adjusted computation window.
10. The computing system program product of claim 9, wherein: generating the autocorrelation with delay l for the adjusted computation window further includes indirectly iteratively computing one or more components of the autocorrelation with delay l for the adjusted computation window, the indirectly iteratively computing the one or more components including computing the one or more components individually based on one or more components other than the component to be computed.
CN201910478153.XA 2019-06-03 2019-06-03 Method for judging self-set delay repeatability of streaming data in real time Pending CN112035520A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910478153.XA CN112035520A (en) 2019-06-03 2019-06-03 Method for judging self-set delay repeatability of streaming data in real time

Publications (1)

Publication Number Publication Date
CN112035520A true CN112035520A (en) 2020-12-04


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102880582A (en) * 2003-10-16 2013-01-16 Intel Corporation Adaptive input/output buffer and methods thereof
CN105472622A (en) * 2014-09-26 2016-04-06 Broadcom Corporation Data transmission method
CN106330751A (en) * 2015-06-18 2017-01-11 Shanghai Jiao Tong University Dynamic resource-request time window and terminal caching mechanism for heterogeneous network transmission
CN106788399A (en) * 2016-12-22 2017-05-31 Zhejiang Shenzhou Quantum Network Technology Co., Ltd. Implementation method of a multichannel coincidence counter with a configurable time window
US9984653B1 (en) * 2015-02-11 2018-05-29 Synaptics Incorporated Method and device for reducing video latency

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HOSSEIN HAMOONI et al.: "Dispatch: Distributed pattern matching over streaming time series", 《2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA》, 24 January 2019 (2019-01-24), pages 1 - 2 *
WANG GUILING et al.: "Stream data integration and services based on cloud computing" (基于云计算的流数据集成与服务), Chinese Journal of Computers (《计算机学报》), vol. 40, no. 1, 30 October 2015 (2015-10-30), pages 107 - 125 *

Similar Documents

Publication Publication Date Title
US9928215B1 (en) Iterative simple linear regression coefficient calculation for streamed data using components
US10659369B2 (en) Decremental autocorrelation calculation for big data using components
US10394809B1 (en) Incremental variance and/or standard deviation calculation for big data or streamed data using components
CN112035521A (en) Method for judging self-set delay repeatability of streaming data in real time
US10394810B1 (en) Iterative Z-score calculation for big data using components
CN112035520A (en) Method for judging self-set delay repeatability of streaming data in real time
CN110515681B (en) Method for judging given delay repeatability of stream data in real time
CN110457340B (en) Method for searching big data self-repeating rule in real time
CN110515680B (en) Method for judging given delay repeatability of big data in real time
US10310910B1 (en) Iterative autocorrelation calculation for big data using components
US10191941B1 (en) Iterative skewness calculation for streamed data using components
CN112035791A (en) Method for judging self-given delay repeatability of big data in real time
CN111708972A (en) Method for judging concentration degree of stream data distribution density in real time
CN112035792A (en) Method for judging self-given delay repeatability of big data in real time
US10235414B1 (en) Iterative kurtosis calculation for streamed data using components
CN111488380A (en) Method for judging asymmetry of stream data distribution in real time
CN111414577A (en) Method for searching self-repeating rule of streaming data in real time
US10225308B1 (en) Decremental Z-score calculation for big data or streamed data using components
CN110363321B (en) Method for predicting big data change trend in real time
US10262031B1 (en) Decremental kurtosis calculation for big data or streamed data using components
US10079910B1 (en) Iterative covariance calculation for streamed data using components
CN110909305B (en) Method for judging data flow change isotropy and degree thereof in real time
CN112035505A (en) Method for judging concentration degree of big data distribution density in real time
US10282445B1 (en) Incremental kurtosis calculation for big data or streamed data using components
CN110362365B (en) Method for predicting change trend of stream data in real time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination