WO2012092224A2 - Filtering queried data on data stores - Google Patents
- Publication number
- WO2012092224A2 (PCT/US2011/067307)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- filter criterion
- store
- filtered
- subset
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/90335—Query processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24532—Query optimisation of parallel queries
Definitions
- a user or a data-driven process may request a particular subset of data by requesting from the data store a query specified in a query language, such as the Structured Query Language (SQL).
- the data store may receive the query, process it using a query processing engine (e.g., a software pipeline comprising components that perform various parsing operations on the query, such as associating names in the query with the named objects of the database and identifying the operations specified by various operators), apply the operations specified by the parsed query to the stored data, and return the query result that has been specified by the query.
- the query result may comprise a set of records specified by the query, a set of attributes of such records, or a result calculated from the data (e.g., a count of records matching certain query criteria).
- the result may also comprise a report of an action taken with respect to the stored data, such as a creation or modification of a table or an insertion, update, or deletion of records in a table.
- the database may be distributed over several, and potentially a large number of, data stores.
- different portions of the stored data may be stored in one or more data stores in a server farm.
- a machine receiving the query may identify which data stores are likely to contain the data targeted by the query, and may send the query to one or more of those data stores.
- Each such data store may apply the query to the data stored therein, and may send back a query result.
- the query results may be combined to generate an aggregated query result.
- one machine may coordinate the process of distributing the query to the involved data stores and aggregating the query results. Techniques such as the MapReduce framework have been devised to achieve such distribution and aggregation in an efficient manner.
- the query language itself may promote the complexity of queries to be handled by the data store, including nesting, computationally intensive similarity comparisons of strings and other data types, and modifications to the structure of the database. Additionally, the logical processes applied by the query processing engine of a data store may be able to answer complicated queries in an efficient manner, and may even improve the query by using techniques such as query optimization. As a result of these and other processes, the evaluation of a query by a data store may consume a large amount of computational resources.
- a distributed database architecture wherein a data store also executes sophisticated queries may compromise some security principles, since the machines that are storing the data are also permitted to execute potentially hazardous or malicious operations on the data. Additionally, the query processing engines may even permit the execution of arbitrary code on the stored data (e.g., an agent scenario wherein an executable module is received from a third party and executed against the stored data).
- a security principle that separates the storage of the data (on a first set of machines) and the execution of complex computation, including arbitrary code, on the data (allocated to a second set of machines) may present several security advantages, such as a data item partition between stored data and a compromised machine.
- the data store does not utilize a query processing engine that might impose significant computational costs, reduce performance in fulfilling requests, and/or permit the execution of arbitrary code on the stored data.
- the data store is also capable of providing only a subset of data stored therein. The data store achieves this result by accepting requests specifying one or more filter criteria, each of which reduces the requested amount of data in a particular manner.
- the request may include a filter criterion specifying a particular filter criterion value, and may request only records having that filter criterion value for a particular filter criterion (e.g., in a data store configured to store data representing events, the filter criterion may identify a type of event or a time when the event occurred).
- the request therefore specifies only various filter criteria, and the data store is capable of providing the data that satisfy the filter criteria, but is not configured to process queries that may specify complex operations.
- This configuration may therefore promote the partitioning of a distributed database into a set of data nodes configured to store and provide data, and a set of compute nodes capable of applying complex queries (including arbitrary code).
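- As a concrete illustration of the above, the following is a minimal sketch of a filter-only data store; the class, attribute, and method names are hypothetical assumptions rather than the patented implementation. The store accepts data items and requests consisting solely of filter criteria, and never parses or executes query text.

```python
# Minimal sketch of a filter-only data store (hypothetical names; not the
# patented implementation). The store accepts only attribute/value filter
# criteria and refuses anything resembling a query-language statement.

class FilterStore:
    def __init__(self):
        self._items = []  # stored data items (plain dicts)

    def store(self, item):
        """Accept and store a data item."""
        self._items.append(item)

    def request(self, filter_criteria):
        """Return the filtered data subset matching all filter criteria.

        filter_criteria is a dict of {attribute: required value}; no query
        text, no operators, no code execution.
        """
        if not isinstance(filter_criteria, dict):
            raise ValueError("only simple filter criteria are accepted")
        return [item for item in self._items
                if all(item.get(attr) == value
                       for attr, value in filter_criteria.items())]


store = FilterStore()
store.store({"event": "login", "time": "2010-12-01", "user": "alice"})
store.store({"event": "logout", "time": "2010-12-01", "user": "alice"})
store.store({"event": "login", "time": "2010-12-02", "user": "bob"})

# The requester supplies only filter criteria; joining, sorting, or other
# complex computation happens elsewhere (e.g., on a compute node).
print(store.request({"event": "login"}))
```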
- FIG. 1 is an illustration of an exemplary scenario featuring an application of a query to a data set distributed over several data stores.
- FIG. 2 is an illustration of an exemplary scenario featuring an application of a request for data from a data set stored by a data store.
- FIG. 3 is an illustration of an exemplary scenario featuring an application of a request featuring at least one filter criterion for data from a data set stored by a data store in accordance with the techniques presented herein.
- Fig. 4 is a flow chart illustrating an exemplary method of fulfilling requests targeting a data set of a data store.
- Fig. 5 is a flow chart illustrating an exemplary method of applying a query to a data set stored by a data store.
- FIG. 6 is an illustration of an exemplary computer-readable medium comprising processor-executable instructions configured to embody one or more of the provisions set forth herein.
- FIG. 7 is an illustration of an exemplary scenario featuring an indexing of data items stored by a data store.
- FIG. 8 is an illustration of an exemplary scenario featuring a partitioning of data items stored by a data store.
- FIG. 9 is an illustration of an exemplary scenario featuring a data item processor set comprising data item processors configured to filter data items in response to a request featuring at least one filter criterion.
- FIG. 10 illustrates an exemplary computing environment wherein one or more of the provisions set forth herein may be implemented.
- the data store may comprise a computer equipped with a storage component (e.g., a memory circuit, a hard disk drive, a solid-state storage device, or a magnetic or optical storage disc) whereupon a set of data is stored, and may be configured to execute software that satisfies requests to access the data that may be received from various users and/or processes.
- the stored data may be voluminous, potentially scaling to millions or billions of records stored in one table and/or a large number of tables, and/or complex, such as a large number of interrelated data tables.
- the data set may be stored on a plurality of data stores.
- two or more data stores may store identical copies of the data set.
- This configuration may be advantageous for promoting availability (e.g., one data store may respond to a request for data when another data store is occupied or offline).
- the data set may be distributed over the data stores, such that each data store stores a portion of the data set.
- This configuration may be advantageous for promoting efficiency (e.g., a distribution of the computational burden of satisfying a request for a particular set of data, such as a particular record, may be limited to the data store that is storing the requested data).
- dozens or hundreds of data stores may be provided, such as in a server farm comprising a very large number of data stores that together store and provide access to a very large data set.
- Fig. 1 presents an exemplary scenario 10 featuring a first architecture for applying a query 14 submitted by a user 12 to a data set 20, comprising a set of data tables 22 each storing a set of records 26 having particular attributes 24.
- the data set 20 has been distributed across many data stores 18 in various ways.
- the data set 20 may be vertically distributed; e.g., the data set 20 may comprise several data tables 22 storing different types of records 26, and a first data store 18 may store the records 26 of a first data table 22 while a second data store 18 may store the records 26 of a second data table 22.
- the data set 20 may be horizontally distributed; e.g., for a particular data table 22, a first data store 18 may store a first set of records 26, while a second data store 18 may store a second set of records 26.
- This distribution may be arbitrary, or may be based on a particular attribute 24 of the data table 22 (e.g., for an attribute 24 specifying an alphabetic string, a first data store 18 may store records 26 beginning with the letters 'A' through 'L', while a second data store 18 may store records 26 beginning with the letters 'M' through 'Z').
- a first data store 18 may store a first set of attributes 24 for the records 26 and a second data store 18 may store a second set of attributes 24 for the records 26, or two data stores 18 may redundantly store the same records 26 in order to promote the availability of the records 26 and the rapid evaluation of queries involving the records 26.
- a user or process may submit a query to be applied to the data set 20.
- a Structured Query Language (SQL) query may comprise one or more operations to be applied to the data set 20, such as selecting records 26 from one or more data tables 22 having particular values for particular attributes 24, projecting particular attributes 24 of such records 26, joining attributes 24 of different records 26 to create composite records 26, and applying various other operations to the selected data (e.g., sorting, grouping, or counting the records) before presenting a query result.
- the query may also specify various alterations of the data set 20, such as inserting new records 26, setting various attributes 24 of one or more records 26, deleting records 26, establishing or terminating relationships between semantically related records 26, and altering the layout of the data set 20, such as by inserting, modifying, or deleting one or more data tables 22.
- These operations may also be chained together into a set, sequence, or conditional hierarchy of such operations.
- Variants of the Structured Query Language also support more complex operations, including the execution of code on the data store; e.g., a query may specify or invoke a stored procedure that is to be executed by the data store on the stored data, or may include an agent, such as an interpretable script or executable binary that is provided to the data store for local execution.
- the data store 18 may comprise a query processing engine, such as a software pipeline comprising components that perform various parsing operations on the query, such as associating names in the query with the named objects of the database and identifying the operations specified by various operators.
- the exemplary scenario 10 of Fig. 1 further presents one technique that is often utilized to apply a query 14 to a data set 20 distributed across many data stores 18.
- a user 12 may submit a query 14 comprising a set of operations 16 that may be applied against the data set 20.
- the operations 16 may be chained together in a logical sequence, e.g., using Boolean operators to specify that the results of particular operations 16 are to be utilized together.
- the query 14 may be delivered to a MapReduce server 28, comprising a computer configured to apply a "MapReduce" technique to distribute the query 14 across the data stores 18 that are storing various portions of the data set 20.
- the MapReduce server 28 may identify that various operations 16 within the query 14 target various portions of the data set 20 that are respectively stored by particular data stores 18. For example, a first operation 16 may target the data stored by a first data store 18 (e.g., a Select operation applied to a data table 22 and/or set of records 26 stored by the first data store 18), while a second operation 16 may target the data stored by a second data store 18. Accordingly, the MapReduce server may decompose the query 14 into various query portions 30, each comprising one or more operations to be performed by a particular data store 18. The data store 18 may receive the query portion 30, apply the operations 16 specified therein, and generate a query result 32 that may be delivered to the MapReduce server 28 (or to another data store 18 for further processing).
- the MapReduce server 28 may then compose the query results 32 provided by the data stores 18 to generate a query result 34 that may be provided to the user 12 in response to the query 14. In this manner, the data stores 18 and the MapReduce server 28 may interoperate to achieve the fulfillment of the query 14.
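- A toy sketch of this distribute-and-aggregate pattern may clarify the division of labor; it is not the MapReduce framework itself (which also handles scheduling, shuffling, and fault tolerance), and the helper names and predicate below are illustrative assumptions.

```python
# Toy sketch of the distribute-and-aggregate pattern (hypothetical helpers;
# not the MapReduce framework itself).

def apply_query_portion(data_store, predicate):
    """'Map' step: a data store evaluates its query portion locally."""
    return [record for record in data_store if predicate(record)]

def compose_results(partial_results):
    """'Reduce' step: the coordinating server composes the partial results."""
    composed = []
    for result in partial_results:
        composed.extend(result)
    return composed

# Two data stores, each holding a horizontal portion of one data table.
store_a = [{"name": "Anders"}, {"name": "Lena"}]
store_b = [{"name": "Maria"}, {"name": "Zoe"}]

# The coordinator sends the same query portion to each involved data store...
portion = lambda record: record["name"].startswith(("A", "M"))
partials = [apply_query_portion(store, portion) for store in (store_a, store_b)]

# ...and composes the query results into an aggregated query result.
print(compose_results(partials))  # [{'name': 'Anders'}, {'name': 'Maria'}]
```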
- the exemplary scenario 10 of Fig. 1 may present some advantages (e.g., an automated parceling out of the query 14 to multiple data stores 18, which may enable a concurrent evaluation of various query portions 30 that may expedite the evaluation of the query 14).
- the exemplary scenario 10 may also present some disadvantages.
- it may be desirable to devise an architecture for a distributed data set, such as a distributed database, wherein the storage and accessing of data is performed on a first set of devices, while complex computational processes are performed on a second set of devices.
- Such a partitioning may be advantageous, e.g., for improving the security of the data set 20.
- queries 14 to be applied to the data set 20 may be computationally expensive (e.g., involving a large amount of memory), paradoxical (e.g., a recursive query that does not end or that cannot logically be evaluated), or malicious (e.g., overly or covertly involving an unauthorized disclosure or modification of the data set 20).
- the computation may involve the execution of code, such as a query 14 that invokes a stored procedure that has been implemented on a data store 18, or mobile agent scenarios, wherein a third party may provide an "agent" (e.g., an interpretable script or partially or wholly compiled executable) that may be applied to the data set 20. Therefore, the security of the data set 20 may be improved by restricting complex computation, including the execution of arbitrary code, to a separate set of machines.
- a second disadvantage that may arise in the exemplary scenario 10 of Fig. 1 involves the performance of the data set 20.
- a particular data store 18 may be configured to store a query portion 30 that, temporarily or chronically, is frequently accessed, such that the data store 18 receives and handles many queries 14 involving the portion of the data set 20 in a short period of time.
- a query 14 involving complex operations may consume computing resources of the data store 18 (e.g., memory, processor capacity, and bandwidth) that may not be available to fulfill other queries 14. Therefore, a single complex query 14 may forestall the evaluation and fulfillment of other queries 14 involving the same data stored by the data store 18.
- Fig. 2 presents an exemplary scenario 40 wherein a data store 18 is configured to store a data set 20 comprising a large number of records 26 (e.g., 50,000 records).
- a user 12 may submit a query 14, which may be received and wholly evaluated by a compute node 42.
- the compute node 42 may comprise, e.g., a query processing engine, which may lexically parse the query 14, identify the operations 16 specified therein, and invoke various components to perform such operations 16, including retrieving data from the data store 18.
- the compute node 42 may simply send a request 44 for a particular set of records 26, such as the records 26 comprising a data table 22 of the data set 20.
- the data store 18 may respond with a request result 48, comprising the requested records 26, to which the compute node 42 may apply some complex computation (e.g., the operations 16 specified in the query 14) and may return a query result 34 to the user 12.
- this exemplary scenario 40 illustrates an inefficiency in this rigid partitioning of responsibilities between the compute node 42 and the data store 18.
- the query 14 may request the retrieval of a single record 26 (e.g., a record 26 of an employee associated with a particular identifier), but the data table 22 stored by the data store 18 may include many such records 26.
- the data store 18 may provide a request result 48 comprising 50,000 records 26 to the compute node 42, even though only one such record 26 is included in the query result 34.
- it may be easy to identify this record 26 from the scope of the query 14 (e.g., if the query 14 identifies the requested record 26 according to an indexed field having unique identifiers for respective records 26), but because the data store 18 cannot perform computations involved in the evaluation of the query 14, this comparatively simple filtering is not performed by the data store 18.
- This inefficiency may become particularly evident, e.g., if the request result 48 is sent to the compute node 42 over a network 46, which may have limited capacity.
- the sending of many records 26 over the network 46 may impose a rate-limiting factor on the completion of the query 14, thereby imposing a significant delay in the fulfillment of a comparatively simple query 14 involving a small query result 34.
- These and other disadvantages may arise from a hard partitioning of the responsibilities of data stores 18 and compute nodes 42 comprising a data set 20.
- a data store 18 may be configured to store one or more data items of a data set 20 (e.g., various tables 22, attributes 24, and/or records 26 of the data set 20), and to participate in the evaluation of a query 14 against such data items.
- the data store 18 is not configured to evaluate a query 14; e.g., the data store 18 may not include a query processing engine, and may refuse to accept or evaluate queries 14 formulated in a query language, such as a Structured Query Language (SQL) query.
- the data store 18 is not limited to providing one or more portions of the data set 20 in response to a request 44, which may cause inefficiencies arising from a rigid partitioning, such as illustrated in the exemplary scenario 40 of Fig. 2. Rather, in accordance with these techniques, the data store 18 is configured to accept requests 44 including one or more filtering criteria that define a filtered data subset.
- the data store 18 may store one or more data tables 22 comprising various records 26, but a small number of attributes 24 for the records 26 may be indexed.
- the filtering may involve identifying, retrieving, and providing a data subset of the data set 20, comprising the records 26 having a particular value for one of the indexed attributes 24. Because the application of the filtering criterion to the data set 20 may result in a significant reduction of data to be sent in the filtered data subset 58, while consuming only a small fraction of the computational resources involved in evaluating a full query 14, the data store 18 may be configured to perform this filtering in response to the request 44.
- the data store 18 may be configured to refrain from performing more complex computational processes; e.g., the data store 18 may wholly omit a query processing engine, may refuse to accept queries 14 specified in a query language, or may reject requests 44 specifying non-indexed attributes 24.
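- A brief sketch of such request validation might look as follows; the set of indexed attributes and the error handling are assumptions chosen for illustration.

```python
# Sketch of request validation on the data store (attribute names assumed):
# query-language strings and filter criteria on non-indexed attributes are
# both rejected; only simple filters on indexed attributes are accepted.

INDEXED_ATTRIBUTES = {"event", "time", "user"}  # assumption for illustration

def validate_request(request):
    if isinstance(request, str):
        # e.g. "SELECT * FROM events WHERE ..." -- no query engine is present.
        raise ValueError("query-language requests are not accepted")
    unsupported = set(request) - INDEXED_ATTRIBUTES
    if unsupported:
        raise ValueError(f"non-indexed filter criteria rejected: {unsupported}")
    return request

validate_request({"event": "login"})                    # accepted
try:
    validate_request({"comment": "contains 'error'"})   # rejected
except ValueError as err:
    print(err)
```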
- the techniques presented herein may achieve greater efficiency and security than in the exemplary scenario 10 of Fig. 1 , while also avoiding the disadvantages presented in the exemplary scenario 40 of Fig. 2.
- FIG. 3 presents an illustration of an exemplary scenario 50 featuring an application of the techniques presented herein to apply a query 14 submitted by a user 12 to a data set 20 storing various data items 52 in order to generate and provide a query result 34.
- access to the data set 20 may be achieved through a data store 18, which, in turn, may be accessed through a compute node 42.
- the data store 18 may refuse to accept the query 14, or may be incapable of evaluating the query 14.
- the data store 18 may accept and evaluate a query 14 only in particular circumstances, e.g., where the query 14 is submitted by an authorized user or administrative process.
- the user 12 may submit the query 14 to the compute node 42, which may endeavor to interact with the data store 18 to evaluate the query and provide a query result 34.
- the compute node 42 may examine the query 14 to identify a request 44 comprising one or more filter criteria 54 that may specify a retrieval of particular data items 52 from the data store 18 (e.g., identifying one or more operations 16 of the query 14 that may be expressed as a request 44 for data items 52 satisfying one or more filter criteria 54).
- the data store 18 is configured to receive data items 52 and store received data items 52 in a storage component (e.g., a memory circuit, a hard disk drive, a solid-state storage device, or a magnetic or optical disc) as part of the data set 20. Additionally, the data store 18 is configured to receive requests 44 comprising one or more filter criteria 54. Upon receiving a request 44, the data store 18 may perform a filtering 56 to identify the data items 52 that satisfy the filter criteria 54, and generate a filtered data subset 58 to be returned to the compute node 42. The compute node 42 may receive the filtered data subset 58 and may apply the remainder of the query 14 (e.g., performing complex computations specified by the operations 16 of the query 14 that were not expressed in the request 44).
- the compute node 42 may send a second or further request 44 to the data set 20 specifying other filter criteria 54, and may utilize the second or further filtered data subsets 58 in the computation.
- the compute node 42 may generate a query result 34, which may be presented to the user 12 (or an automated process) in response to the query 14.
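- The compute-node side of this exchange may be sketched as follows, assuming a hypothetical in-memory representation of the query and of the data store's request interface; the point is only that the filterable part of the query travels to the data store, while the complex remainder runs on the compute node.

```python
# Sketch of the compute-node role (hypothetical query representation): the
# filterable part of the query is expressed as a request, the data store
# returns the filtered data subset, and the complex remainder of the query is
# applied locally on the compute node.

def evaluate_query(query, send_request):
    request = query["filter_criteria"]            # 1. express the request
    filtered_subset = send_request(request)       # 2. data store filters only
    return query["postprocess"](filtered_subset)  # 3. complex work done here

# A tiny in-memory stand-in for the data store's filter-only interface.
events = [
    {"event": "login", "user": "alice"},
    {"event": "login", "user": "bob"},
    {"event": "logout", "user": "alice"},
    {"event": "login", "user": "alice"},
]
def simple_store_request(criteria):
    return [e for e in events
            if all(e.get(k) == v for k, v in criteria.items())]

# Example query: count login events per user; the grouping runs on the
# compute node, not on the data store.
query = {
    "filter_criteria": {"event": "login"},
    "postprocess": lambda items: {
        user: sum(1 for i in items if i["user"] == user)
        for user in {i["user"] for i in items}
    },
}
print(evaluate_query(query, simple_store_request))  # {'alice': 2, 'bob': 1}
```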
- the configuration of the data store 18, and optionally the compute node 42, may enable the fulfillment of queries 14 in a more efficient and secure manner than presented in the exemplary scenario 10 of Fig. 1 and/or the exemplary scenario 40 of Fig. 2.
- Fig. 4 presents a first embodiment of these techniques, illustrated as an exemplary method 60 of fulfilling requests 44 targeting a data set 20.
- the exemplary method 60 may be performed, e.g., by a data store 18 configured to store or having access to part or all of the data set 20. Additionally, the exemplary method 60 may be implemented, e.g., as a set of software instructions stored in a memory component (e.g., a system memory circuit, a platter of a hard disk drive, a solid state storage device, or a magnetic or optical disc) of the data store 18, that, when executed by the processor of the data store 18, cause the processor to perform the techniques presented herein.
- the exemplary method 60 begins at 62 and involves executing 64 the instructions on the processor.
- the instructions are configured to, upon receiving a data item 52, store 66 the data item 52 in the data set 20.
- the instructions are also configured to, upon receiving 68 a request 44 specifying at least one filter criterion 54, retrieve 70 the data items 52 of the data set 20 satisfying the at least one filter criterion to generate a filtered data subset 58, and to send 72 the filtered data subset 58 in response to the request 44.
- the exemplary method 60 achieves the fulfillment of the request 44 to access the data set 20 without exposing the data store 18 to the security risks, inefficiencies, and consumption of computational resources involved in evaluating a query 14, and so ends at 74.
- Fig. 5 presents a second embodiment of these techniques, illustrated as an exemplary method 80 of applying a query 14 to a data set 20 stored by a data store 18.
- the exemplary method 80 may be performed, e.g., on a device, such as a compute node 42, having a processor. Additionally, the exemplary method 80 may be implemented, e.g., as a set of software instructions stored in a memory component (e.g., a system memory circuit, a platter of a hard disk drive, a solid state storage device, or a magnetic or optical disc) of the compute node 42 or other device, that, when executed by the processor, cause the processor to perform the techniques presented herein.
- the exemplary method 80 begins at 82 and involves executing 84 the instructions on the processor. More specifically, the instructions are configured to, from the query 14, generate 86 a request 44 specifying at least one filter criterion 54. The instructions are also configured to send 88 the request 44 to the data store 18, and, upon receiving from the data store 18 a filtered data subset 58 in response to the request 44, apply 90 the query 14 to the filtered data subset 58. In this manner, the exemplary method 80 achieves the fulfillment of a query 14 to the data set 20 without exposing the data store 18 to the security risks, inefficiencies, and consumption of computational resources involved in evaluating the query 14, and so ends at 92.
- Still another embodiment involves a computer-readable medium comprising processor-executable instructions configured to apply the techniques presented herein.
- Such computer-readable media may include, e.g., computer-readable storage media involving a tangible device, such as a memory semiconductor (e.g., a semiconductor utilizing static random access memory (SRAM), dynamic random access memory (DRAM), and/or synchronous dynamic random access memory (SDRAM) technologies), a platter of a hard disk drive, a flash memory device, or a magnetic or optical disc (such as a CD-R, DVD-R, or floppy disc), encoding a set of computer-readable instructions that, when executed by a processor of a device, cause the device to implement the techniques presented herein.
- Such computer-readable media may also include (as a class of technologies that are distinct from computer-readable storage media) various types of communications media, such as a signal that may be propagated through various physical phenomena (e.g., an electromagnetic signal, a sound wave signal, or an optical signal) and in various wired scenarios (e.g., via an Ethernet or fiber optic cable) and/or wireless scenarios (e.g., a wireless local area network (WLAN) such as WiFi, a personal area network (PAN) such as Bluetooth, or a cellular or radio network), and which encodes a set of computer- readable instructions that, when executed by a processor of a device, cause the device to implement the techniques presented herein.
- An exemplary computer-readable medium that may be devised in these ways is illustrated in Fig. 6, wherein the implementation 100 comprises a computer-readable medium 102 (e.g., a CD-R, DVD-R, or a platter of a hard disk drive), on which is encoded computer-readable data 104.
- This computer-readable data 104 in turn comprises a set of computer instructions 106 configured to operate according to the principles set forth herein.
- the processor-executable instructions 106 may be configured to perform a method of fulfilling requests targeting a data set of a data store, such as the exemplary method 60 of Fig. 4.
- the processor-executable instructions 106 may be configured to implement a method of applying a query to a data set stored by a data store, such as the exemplary method 80 of Fig. 5.
- this computer-readable medium may comprise a nontransitory computer-readable storage medium (e.g., a hard disk drive, an optical disc, or a flash memory device) that is configured to store processor-executable instructions configured in this manner.
- Many such computer-readable media may be devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein.
- a first aspect that may vary among embodiments of these techniques relates to the scenarios wherein such techniques may be utilized.
- many types of data stores 18 may be utilized to apply the queries 14 and requests 44 to a data set 20.
- the data stores 18 and/or compute nodes 42 may comprise distinct hardware devices (e.g., different machines or computers), distinct circuits (e.g., field- programmable gate arrays (FPGAs)) operating within a particular hardware device, or software processes (e.g., separate threads) executing within one or more computing environments on one or more processors of a particular hardware device.
- the data stores 18 and/or compute nodes 42 may also comprise virtual processes, such as distributed processes that may be executed across one or more hardware devices.
- respective data stores 18 may internally store the data items 52 comprising the data set 20, or may have access to other data stores 18 that internally store the data items 52 (e.g., a data access layer or device interfacing with a data storage layer or device).
- data sets 20 may be accessed using the techniques presented herein, such as a database, a file system, a media library, an email mailbox, an object set in an object system, or a combination of such data sets 20.
- many types of data items 52 may be stored in the data set 20.
- the queries 14 and/or requests 44 evaluated using the techniques presented herein may be specified in many ways.
- a query 14 may be specified according to a Structured Query Language (SQL) variant, as a language-integrated query (e.g., a LINQ query), or an interpretable script or executable object configured to perform various manipulations of the data items 52 within the data set 20.
- the request 44 may also be specified in various ways, e.g., simply specifying an indexed attribute 24 and one or more values of such attributes 24 of data items 52 to be included in the filtered data subset 58. While the request 44 is limited to one or more filter criteria 54 specifying the data items 52 to be included in the filtered data subset 58, the language, syntax, and/or protocol whereby the query 14 and request 44 are formatted may not significantly affect the application or implementation of the techniques presented herein.
- a second aspect that may vary among embodiments of these techniques relates to the storing of data items 52 in the data set 20 by the data store 18.
- a data store 18 may comprise at least one index, which may correspond to one or more filter criteria 54 (e.g., a particular attribute 24, such that records 26 containing one or more values for the attribute 24 are to be included in the filtered data subset 58).
- a data store 18 may be configured to, upon receiving a data item 52, index the data item 52 in the index according to the filter criterion 54 (e.g., according to the value of the data item 52 for one or more attributes 24 that may be targeted by a filter criterion 54).
- the data store 18 may then be capable of fulfilling a request 44 by identifying the data items 52 satisfying the filter criteria 54 of the request 44 by using an index corresponding to the filter criterion 54. It may be advantageous to choose attributes 24 of the data items 52 for indexing that are likely to be targeted by filter criteria 54 of requests 44, and to refrain from indexing the other attributes 24 of the data items 52 (e.g., indices have to be maintained as data items 52 change, and it may be disadvantageous to undertake the computational burden of such maintenance in order to index an attribute 24 that is not likely to be frequently included as a filter criterion 54).
- it may be desirable to configure a data store 18 to generate and maintain indices for an index set comprising an event index specifying an event type represented by respective data items 52; a time index specifying a time of an event represented by respective data items 52; and a user index specifying at least one user associated with an event represented by respective data items 52.
- it may be less advantageous to generate indices for other attributes 24 of this data set 20, such as a uniform resource identifier (URI) of a digital resource involved in the request, a comment field whereupon textual comments regarding particular events may be entered by various users and administrators, or a "blob" field involving a large data set involved in the event (e.g., a system log or a captured image that depicts the event).
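- One possible (assumed, not prescribed) arrangement of such an index set for event data items is sketched below, indexing only the event, time, and user attributes while storing, but not indexing, a comment attribute.

```python
# Sketch of an index set for event data items (names assumed): only the
# event, time, and user attributes are indexed; other attributes such as a
# comment field are stored but never indexed.

from collections import defaultdict

class IndexedEventStore:
    INDEXED = ("event", "time", "user")

    def __init__(self):
        self._items = []
        # one index per indexed filter criterion: value -> item positions
        self._indices = {attr: defaultdict(list) for attr in self.INDEXED}

    def store(self, item):
        position = len(self._items)
        self._items.append(item)
        for attr in self.INDEXED:
            if attr in item:
                self._indices[attr][item[attr]].append(position)

    def request(self, attr, value):
        """Return the filtered data subset for one indexed filter criterion."""
        return [self._items[p] for p in self._indices[attr].get(value, [])]


store = IndexedEventStore()
store.store({"event": "login", "time": "2010-12", "user": "alice",
             "comment": "routine sign-in"})   # comment is stored, not indexed
store.store({"event": "error", "time": "2010-12", "user": "bob"})
print(store.request("user", "alice"))
```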
- the index may identify data items 52 associated with one or more particular filter criterion values for a particular filter criterion 54 in various ways.
- an index may specify, for a filter criterion value of a filter criterion 54 corresponding to the index, a data item set that identifies the data items having the filter criterion value for the filter criterion 54.
- the index may store, for each filter criterion value of the filter criterion 54, a set of references to the data items 52 associated with the filter criterion value.
- the data item set stored in the index may be accessible in various ways.
- the index may permit incremental writing to the data item set (e.g., indexing a new data item 52 by adding the data item 52 to the data item set of data items having the filter criterion value for the filter criterion), but may permit only atomic reading of the data item set (e.g., for a request 44 specifying a particular filter criterion value for a particular filter criterion 54, the index may read and present the entire data item set, comprising the entire set of references to such data items 52).
- the data store 18 may, upon receipt of respective data items 52, store the data items 52 in a data item buffer, such that, when the data item buffer exceeds a data item buffer size threshold (e.g., the capacity of the data item buffer), the data store 18 may add the data items to respective data item sets and empty the data item buffer.
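- A sketch of this buffering strategy is shown below; the buffer size threshold and class name are assumptions, and the read path simply flushes the buffer before returning an entire data item set atomically.

```python
# Sketch of buffered index maintenance (threshold value assumed): received
# data items accumulate in a buffer; once the buffer exceeds its size
# threshold they are appended incrementally to the per-value data item sets,
# and a read returns an entire data item set at once.

from collections import defaultdict

class BufferedIndex:
    def __init__(self, criterion, buffer_threshold=3):
        self.criterion = criterion
        self.buffer_threshold = buffer_threshold
        self._buffer = []
        self._data_item_sets = defaultdict(list)  # value -> data item set

    def add(self, item):
        self._buffer.append(item)
        if len(self._buffer) > self.buffer_threshold:
            self.flush()

    def flush(self):
        for item in self._buffer:        # incremental writes into the sets
            self._data_item_sets[item[self.criterion]].append(item)
        self._buffer.clear()             # empty the data item buffer

    def read(self, value):
        """Atomic read: the entire data item set for this criterion value."""
        self.flush()  # make buffered items visible before reading
        return list(self._data_item_sets[value])


index = BufferedIndex("event")
for n in range(5):
    index.add({"event": "login", "seq": n})  # flushed once the buffer overflows
print(len(index.read("login")))              # 5
```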
- Fig. 7 presents an illustration of an exemplary scenario 110 featuring an indexing of data items 52 in one or more data item sets 118 indexed according to an index 112.
- the data store 18 may receive various data items 52 (e.g., a set of reported events) and may store such data items 52 in a data set 20.
- the data store 18 may generate an index 112, comprising a set of index entries 114 including references 116 to one or more data items 52 of one or more data item sets 118, each corresponding to a different filter criterion value for a filter criterion 54 (e.g., the month and year of a date when an event occurred).
- the data store 18 may identify one or more filter criterion values of the data item 52, and may store a reference to the data item 52 stored in an index entry 114 of the index 112 corresponding to the filter criterion value. The data store 18 may then store the data item 52 in the data set 20 (e.g., by appending the data item 52 to a list of records 26).
- the data store 18 may fulfill the request 44 by retrieving a data item set 118 associated with the filter criterion value, and in particular may do so by identifying the index entry 114 of the index 112 identifying the data items 52 of the data item set 118 corresponding to the filter criterion value.
- the data store 18 may then use the references 116 stored in the index entry 114 to retrieve the data items 52 of the data item set 118, and may send such data items 52 as the filtered data subset 58.
- the data store 18 may fulfill the request 44 in an efficient manner by using the index 112 corresponding to the filter criterion 54 of the request 44.
- respective index entries 114 of an index 112 may store, for a first filter criterion value of a filter criterion 54, references to data item partitions corresponding to respective second filter criterion values of a second filter criterion 54.
- Data items 52 may be stored and/or retrieved using this two-tier indexing technique.
- storing a data item 52 may involve using the index 112 to identify the index entry 114 associated with a first filter criterion value of a first filter criterion 54 for the data item 52, examining the data item partitions referenced by the index entry 114 to identify the data item partition associated with a second filter criterion value of a second filter criterion 54 for the data item 52, and storing the data item 52 in the data item partition.
- retrieving data items 52 having a particular first filter criterion value of a first filter criterion 54 and a particular second filter criterion value of a second filter criterion 54 may involve using the index 112 to identify the index entry 114 associated with the first filter criterion value; examining the data item partitions referenced in the index entry 114 to identify the data item partition associated with the second filter criterion value; and retrieving and sending the data item partition in response to the request 44.
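- The two-tier arrangement may be sketched as follows, using the month as the assumed first filter criterion and the event type as the assumed second filter criterion.

```python
# Sketch of two-tier indexing: a first filter criterion (assumed here to be
# the month) selects an index entry, which references data item partitions
# keyed by a second filter criterion (assumed here to be the event type).

from collections import defaultdict

class TwoTierIndex:
    def __init__(self, first_criterion, second_criterion):
        self.first = first_criterion
        self.second = second_criterion
        # index entry per first-criterion value -> {second value: partition}
        self._entries = defaultdict(lambda: defaultdict(list))

    def store(self, item):
        entry = self._entries[item[self.first]]  # first-tier lookup
        entry[item[self.second]].append(item)    # store in the partition

    def retrieve(self, first_value, second_value):
        return list(self._entries[first_value][second_value])


index = TwoTierIndex("month", "event")
index.store({"month": "2010-12", "event": "login", "user": "alice"})
index.store({"month": "2010-12", "event": "error", "user": "bob"})
print(index.retrieve("2010-12", "login"))
```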
- a data store 18 may configure an index as a set of partitions, each including the data items 52 (or references thereto, e.g., a memory reference or URI where the data item 52 may be accessed, or a distinctive identifier of the data item 52, such as a key value of a key field of a data table 22) satisfying a particular filter criterion 54.
- the data store 18 may generate various partitions, such as small sections of memory allocated to store data items 52 having a particular filter criterion value of a particular filter criterion 54.
- the data store 18 may store the data item 52 in the corresponding partition; and upon receiving a request 44 specifying a filter criterion value of a particular filter criterion 54, the data store 18 may identify the data item partition storing the data items 52 having the filter criterion value for the filter criterion, and send the data item partition as the filtered data subset 58.
- two or more indices may be utilized to group data items according to two or more filter criteria 54.
- Fig. 8 presents an illustration of an exemplary scenario 120 featuring a partitioning of data items 52 into respective data item partitions 122.
- the data store 18 may receive various data items 52 (e.g., a set of reported events) and may store such data items 52 in a data set 20.
- the data store 18 may again generate an index 112 (not shown), comprising a set of index entries 114 including references 116 to one or more data items 52 of one or more data item sets 118, each corresponding to a different filter criterion value for a filter criterion 54 (e.g., the month and year of a date when an event occurred).
- the data items 52 are stored in a manner that is partitioned according to the filter criterion value.
- the data store 18 may identify one or more filter criterion values of the data item 52, and may identify a data item partition 122 associated with the filter criterion value. The data store 18 may then store the data item 52 in the data item partition 122 corresponding to the filter criterion value.
- when a user 12 submits a request 44 to a data store 18 (either directly or indirectly, e.g., through a compute node 42), the data store 18 may fulfill the request 44 by retrieving a data item set 118 associated with the filter criterion value, and in particular may do so by identifying the data item partition 122 associated with the filter criterion value. The data store 18 may then retrieve the entire data item partition 122, and may send the entire data item partition 122 to the user 12.
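- A compact sketch of this partition-based retrieval, with the month of the event as an assumed filter criterion, might look like this; a request is fulfilled by returning an entire partition rather than by examining individual data items.

```python
# Sketch of partition-based storage and retrieval (month used as the assumed
# filter criterion): each data item is appended to the partition for its
# criterion value, and a request is answered by sending back the whole
# partition without inspecting individual data items.

from collections import defaultdict

partitions = defaultdict(list)  # filter criterion value -> data item partition

def store(item):
    partitions[item["month"]].append(item)

def request(month_value):
    return list(partitions[month_value])  # the entire partition is the subset

store({"month": "2010-11", "event": "login"})
store({"month": "2010-12", "event": "error"})
store({"month": "2010-12", "event": "login"})
print(request("2010-12"))  # both December data items, no per-item filtering
```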
- Additional data item partitions 122 may be retrieved and sent in response to other filter criteria 54 (e.g., two or more filter criterion values for a particular filter criterion 54, or a filter criterion value specified in the alternative for each of two or more different filter criteria 54).
- the data store 18 may identify and provide the data items 52 satisfying the filter criterion 54 in an efficient manner by using the data item partitions 122 corresponding to one or more filter criteria 54 specified in the request 44.
- Those of ordinary skill in the art may devise many ways of storing data items 52 of a data set 20 in accordance with the techniques presented herein.
- a third aspect that may vary among embodiments of these techniques involves the configuration of a data store 18 and/or a compute node 42 to retrieve data items 52 satisfying the filter criteria 54 of a request 44.
- the request 44 may comprise many types of filter criteria 54.
- the request 44 may specify a first filtered data subset 58 that may relate to the data items 52 comprising a second filtered data subset 58, and the data store 18 may utilize the first filtered data subset 58 while generating the second filtered data subset 58.
- a query 14 may involve a request 44 specifying another filtered data subset 58 (e.g., in the query 14 "select username from users where user.id in (10, 22, 53, 67)", a request 44 is filtered according to a set of numeric user IDs presented as a filtered data subset 58).
- a query 14 may involve a first request 44 specifying a first filtered data subset 58, which may be referenced in a second request 44 specifying a second filtered data subset 58.
- a request 44 may reference a filtered data subset 58 generated by another request 44, including an earlier request 44 provided and processed while evaluating the same query 14.
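- A sketch of such request chaining, loosely mirroring the "user.id in (...)" example above with assumed table and attribute names, is shown below: the values in a first filtered data subset parameterize the filter criterion of a second request.

```python
# Sketch of chained requests (table and attribute names assumed), loosely
# mirroring the "user.id in (...)" example: values taken from a first filtered
# data subset become the filter criterion values of a second request.

users = [{"id": 10, "username": "alice"}, {"id": 11, "username": "bob"},
         {"id": 22, "username": "carol"}]
orders = [{"user_id": 10, "total": 12.5}, {"user_id": 11, "total": 3.0},
          {"user_id": 10, "total": 7.25}]

def request(items, attribute, allowed_values):
    """A request limited to a filter criterion: attribute value must match."""
    return [item for item in items if item[attribute] in allowed_values]

first_subset = request(users, "id", {10, 22})     # first request
ids = {u["id"] for u in first_subset}
second_subset = request(orders, "user_id", ids)   # second, chained request
print(second_subset)  # the orders placed by users 10 and 22
```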
- a data store 18, when presented with a request 44 including at least one filter criterion 54, may be configured to retrieve from the data set 20 the data items 52 satisfying respective filter criteria 54 of the request 44 (e.g., by utilizing an index 112 to identify a data item set 118 and/or data item partition 122, as in the exemplary scenario 110 of Fig. 7 and the exemplary scenario 120 of Fig. 8).
- the data store 18 may retrieve all of the data items 52 of the data set 20, and may send (e.g., to a compute node 42 or user 12 submitting the request 44 to the data store 18) only the data items 52 satisfying the at least one filter criterion.
- in the former example, the filtering of data items 52 is achieved during the indexing of the data items 52 upon receipt; in the latter example, the filtering of data items 52 is achieved during the sending of the data items 52. It may be difficult to filter all of the data items 52 in realtime, e.g., in order to fulfill a request 44. However, some techniques may be utilized to expedite the realtime filtering of the data items 52, alternatively or in combination with the use of indices 112 and/or partitions 122.
- Fig. 9 presents an illustration of an exemplary scenario 130 featuring one technique for implementing a realtime filtering of data items 52.
- a data store 18 receives from a user 12 a request 44 specifying at least one filter criterion 54, and endeavors to fulfill the request 44 by providing a filtered data subset 58 comprising only the data items 52 satisfying the filter criteria 54 of the request 44.
- the data store 18 retrieves all of the data items 52 from the data set 20, and then applies a data item processor set 132 to the entire set of data items 52 in order to identify and provide only the data items 52 satisfying the filter criteria 54.
- the data item processor set 132 may comprise, e.g., a set of data item processors 134, each having a state 136 and at least one filtering condition (e.g., a logical evaluation of any particular data item 52 to identify whether or not a filtering criterion 54 is satisfied).
- the data item processors 134 may be individually configured to, upon receiving a data item 52, update the state 136 of the data item processor 134; and when the state 136 of the data item processor 134 satisfies the at least one filtering condition, the data item processor 134 may authorize the data item 52 to be sent (e.g., by including the data item 52 in the filtered data subset 58, or by sending the data item 52 to a different data item processor 134 for further evaluation).
- the data item processors 134 may therefore be interconnected and may interoperate, e.g., as a realtime processing system that evaluates data items 52 using a state machine. Accordingly, the data store 18 may invoke the data item processor set 132 upon the data items 52 retrieved from the data set 20, and may send only the data items 52 that have been authorized to be sent by the data item processor set 132. In this manner, the data store 18 may achieve an ad hoc, realtime evaluation of all data items 52 of the data set 20 to identify and deliver the data items 52 satisfying the filter criteria 54 of the request 44 without having to generate, maintain, or utilize indices 112 or partitions 122.
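- A sketch of such a data item processor set follows; the particular states and filtering conditions are assumptions, and a production system would likely stream items rather than hold them in memory.

```python
# Sketch of a data item processor set (states and conditions assumed): each
# processor updates a small state per data item and authorizes the item only
# when its filtering condition is satisfied; only items authorized by every
# processor are sent in the filtered data subset.

class DataItemProcessor:
    def __init__(self, condition):
        self.state = {"seen": 0}       # per-processor state
        self.condition = condition

    def process(self, item):
        self.state["seen"] += 1        # update the state for this data item
        self.state["last_item"] = item
        return self.condition(item, self.state)  # authorize or withhold

def run_processor_set(processors, items):
    """Send only the data items authorized by every processor in the set."""
    return [item for item in items
            if all(p.process(item) for p in processors)]

processors = [
    DataItemProcessor(lambda item, state: item["event"] == "error"),
    DataItemProcessor(lambda item, state: item["time"].startswith("2010-12")),
]
items = [{"event": "error", "time": "2010-12-03"},
         {"event": "login", "time": "2010-12-03"},
         {"event": "error", "time": "2011-01-05"}]
print(run_processor_set(processors, items))  # only the first item is sent
```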
- the data store 18 may, before providing a filtered data subset 58 in response to a request 44 (and optionally before retrieving the data items 52 matching the filter criteria 54 of the request 44), estimate the size of the filtered data subset 58.
- a request 44 received by the data store 18 may involve a comparatively large filtered data subset 58 that may take a significant amount of computing resources to retrieve and send in response to the request 44.
- an embodiment may first estimate a filtered data subset size of the filtered data subset 58 (e.g., a total estimated number of records 26 or data items 52 to be included in the filtered data subset 58), and may endeavor to verify that the retrieval of the filtered data subset 58 of this size is acceptable to the requester.
- an embodiment may be configured to, before sending a filtered data subset 58 in response to a request 44, estimate the filtered data subset size of the filtered data subset 58 and send the filtered subset data size to the requester, and may only proceed with the retrieval and sending of the filtered data subset 58 upon receiving a filtered data subset authorization from the requester.
- a compute node 42 may be configured to, after sending a request 44 specifying at least one filter criterion 54 and before receiving a filtered data subset 58 in response to the request 44, receive from the data store 18 an estimate of a filtered data subset size of the filtered data subset 58, and may verify the filtered data subset size (e.g., by confirming that it does not exceed an acceptable threshold). Upon verifying the filtered data subset size, the compute node 42 may generate and send to the data store 18 a filtered data subset authorization, and may subsequently receive the filtered data subset 58.
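- A sketch of this estimate-and-authorize handshake is shown below; the size threshold and the use of index statistics as the estimate source are assumptions.

```python
# Sketch of the estimate-and-authorize handshake (threshold and estimate
# source assumed): the data store reports an estimated filtered data subset
# size, and the compute node authorizes the transfer only if the estimate is
# acceptable.

def data_store_estimate(index_counts, criterion_value):
    """Estimate the filtered data subset size, e.g., from index statistics."""
    return index_counts.get(criterion_value, 0)

def compute_node_fetch(criterion_value, index_counts, fetch, max_items=10_000):
    estimate = data_store_estimate(index_counts, criterion_value)
    if estimate > max_items:
        # Withhold the filtered data subset authorization.
        raise RuntimeError(f"estimated subset of {estimate} items is too large")
    return fetch(criterion_value)  # authorization granted; receive the subset

index_counts = {"login": 42, "heartbeat": 2_500_000}
fetch = lambda value: [f"item-{i}" for i in range(index_counts[value])]
print(len(compute_node_fetch("login", index_counts, fetch)))  # 42
# compute_node_fetch("heartbeat", index_counts, fetch)  # would refuse
```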
- Those of ordinary skill in the art may devise many ways of configuring a data store 18 and/or a compute node 42 to retrieve data items 52 from the data set 20 in accordance with the techniques presented herein.
- a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
- an application running on a controller and the controller can be a component.
- One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
- the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.
- the term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media.
- Fig. 10 and the following discussion provide a brief, general description of a suitable computing environment to implement embodiments of one or more of the provisions set forth herein.
- the operating environment of Fig. 10 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment.
- Example computing devices include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, mini computers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- embodiments are described in the general context of "computer readable instructions" being executed by one or more computing devices.
- Computer readable instructions may be distributed via computer readable media (discussed below).
- Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types.
- functionality of the computer readable instructions may be combined or distributed as desired in various environments.
- Fig. 10 illustrates an example of a system 140 comprising a computing device 142 configured to implement one or more embodiments provided herein.
- computing device 142 includes at least one processing unit 146 and memory 148.
- memory 148 may be volatile (such as RAM, for example), nonvolatile (such as ROM, flash memory, etc., for example) or some combination of the two. This configuration is illustrated in Fig. 10 by dashed line 144.
- device 142 may include additional features and/or functionality.
- device 142 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like.
- additional storage is illustrated in Fig. 10 by storage 150.
- computer readable instructions to implement one or more embodiments provided herein may be in storage 150.
- Storage 150 may also store other computer readable instructions to implement an operating system, an application program, and the like.
- Computer readable instructions may be loaded in memory 148 for execution by processing unit 146, for example.
- Computer readable media includes computer storage media.
- Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data.
- Memory 148 and storage 150 are examples of computer storage media.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 142. Any such computer storage media may be part of device 142.
- Device 142 may also include communication connection(s) 156 that allows device 142 to communicate with other devices. Communication connection(s) 156 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting computing device 142 to other computing devices.
- Communication connection(s) 156 may include a wired connection or a wireless connection.
- Communication connection(s) 156 may transmit and/or receive communication media.
- Computer readable media may include communication media.
- Communication media typically embodies computer readable instructions or other data in a “modulated data signal” such as a carrier wave or other transport mechanism and includes any information delivery media.
- modulated data signal may include a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- Device 142 may include input device(s) 154 such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, and/or any other input device.
- Output device(s) 152 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 142.
- Input device(s) 154 and output device(s) 152 may be connected to device 142 via a wired connection, wireless connection, or any combination thereof.
- an input device or an output device from another computing device may be used as input device(s) 154 or output device(s) 152 for computing device 142.
- Components of computing device 142 may be connected by various interconnects, such as a bus.
- Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), firewire (IEEE 1394), an optical bus structure, and the like.
- components of computing device 142 may be interconnected by a network.
- memory 148 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
- a computing device 160 accessible via network 158 may store computer readable instructions to implement one or more embodiments provided herein.
- Computing device 142 may access computing device 160 and download a part or all of the computer readable instructions for execution.
- Alternatively, computing device 142 may download pieces of the computer readable instructions as needed, or some instructions may be executed at computing device 142 and some at computing device 160 (see the sketch after this list).
- One or more of the operations described may constitute computer readable instructions stored on one or more computer readable media which, if executed by a computing device, will cause the computing device to perform the operations described.
- The order in which some or all of the operations are described should not be construed to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein.
- The word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word “exemplary” is intended to present concepts in a concrete fashion.
- The term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations: if X employs A, X employs B, or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances.
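- To make the split execution described above concrete, the following is a minimal illustrative sketch and not part of the disclosure: it assumes a hypothetical pair of functions, `remote_filter` standing in for instructions executed at a remote computing device that holds the data, and `local_refine` standing in for instructions executed at the requesting computing device. All names and sample records are invented for illustration.

```python
from typing import Callable, Iterable, List, Tuple

Record = Tuple[str, int]  # hypothetical record shape: (key, value)


def remote_filter(records: Iterable[Record],
                  criterion: Callable[[Record], bool]) -> List[Record]:
    """Stands in for instructions executed at the remote device:
    apply a coarse filter criterion near the data and return only
    the matching subset."""
    return [r for r in records if criterion(r)]


def local_refine(subset: Iterable[Record],
                 criterion: Callable[[Record], bool]) -> List[Record]:
    """Stands in for instructions executed at the local device:
    apply a second, finer criterion to the already-reduced subset."""
    return [r for r in subset if criterion(r)]


if __name__ == "__main__":
    store = [("a", 3), ("b", 42), ("c", 17), ("d", 99)]
    # Coarse criterion evaluated "remotely"; finer criterion evaluated locally.
    reduced = remote_filter(store, lambda r: r[1] > 10)
    result = local_refine(reduced, lambda r: r[0] != "d")
    print(result)  # [('b', 42), ('c', 17)]
```

- The sketch only illustrates the division of labor between two devices; in practice the remote portion would run in a separate process or machine reached over communication connection(s) such as those described above.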
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Database Structures and File System Structures Therefor (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013547600A JP5990192B2 (en) | 2010-12-28 | 2011-12-24 | Filtering query data in the data store |
CA2822900A CA2822900C (en) | 2010-12-28 | 2011-12-24 | Filtering queried data on data stores |
EP11854242.2A EP2659403A4 (en) | 2010-12-28 | 2011-12-24 | Filtering queried data on data stores |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/979,467 US10311105B2 (en) | 2010-12-28 | 2010-12-28 | Filtering queried data on data stores |
US12/979,467 | 2010-12-28 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2012092224A2 true WO2012092224A2 (en) | 2012-07-05 |
WO2012092224A3 WO2012092224A3 (en) | 2012-10-11 |
Family
ID=46318296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2011/067307 WO2012092224A2 (en) | 2010-12-28 | 2011-12-24 | Filtering queried data on data stores |
Country Status (7)
Country | Link |
---|---|
US (2) | US10311105B2 (en) |
EP (1) | EP2659403A4 (en) |
JP (1) | JP5990192B2 (en) |
CN (1) | CN102682052B (en) |
CA (1) | CA2822900C (en) |
HK (1) | HK1174111A1 (en) |
WO (1) | WO2012092224A2 (en) |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8918388B1 (en) * | 2010-02-26 | 2014-12-23 | Turn Inc. | Custom data warehouse on top of mapreduce |
US20130091266A1 (en) | 2011-10-05 | 2013-04-11 | Ajit Bhave | System for organizing and fast searching of massive amounts of data |
US9697174B2 (en) | 2011-12-08 | 2017-07-04 | Oracle International Corporation | Efficient hardware instructions for processing bit vectors for single instruction multiple data processors |
US9792117B2 (en) | 2011-12-08 | 2017-10-17 | Oracle International Corporation | Loading values from a value vector into subregisters of a single instruction multiple data register |
US10534606B2 (en) | 2011-12-08 | 2020-01-14 | Oracle International Corporation | Run-length encoding decompression |
US9727606B2 (en) | 2012-08-20 | 2017-08-08 | Oracle International Corporation | Hardware implementation of the filter/project operations |
US9563658B2 (en) | 2012-08-20 | 2017-02-07 | Oracle International Corporation | Hardware implementation of the aggregation/group by operation: hash-table method |
US9600522B2 (en) * | 2012-08-20 | 2017-03-21 | Oracle International Corporation | Hardware implementation of the aggregation/group by operation: filter method |
CN103034678A (en) * | 2012-11-01 | 2013-04-10 | 沈阳建筑大学 | RkNN (reverse k nearest neighbor) inquiring method based on Voronoi diagram |
US9152671B2 (en) * | 2012-12-17 | 2015-10-06 | General Electric Company | System for storage, querying, and analysis of time series data |
CN103257987A (en) * | 2012-12-30 | 2013-08-21 | 北京讯鸟软件有限公司 | Rule-based distributed log service implementation method |
US8694503B1 (en) * | 2013-07-31 | 2014-04-08 | Linkedin Corporation | Real-time indexing of data for analytics |
US11113054B2 (en) | 2013-09-10 | 2021-09-07 | Oracle International Corporation | Efficient hardware instructions for single instruction multiple data processors: fast fixed-length value compression |
US9606921B2 (en) | 2013-09-21 | 2017-03-28 | Oracle International Corporation | Granular creation and refresh of columnar data |
CN106062739B (en) * | 2013-11-12 | 2020-02-28 | 皮沃塔尔软件公司 | Dynamic stream computation topology |
KR102103543B1 (en) * | 2013-11-28 | 2020-05-29 | 삼성전자 주식회사 | All-in-one data storage device having internal hardware filter, method thereof, and system having the data storage device |
US9886310B2 (en) * | 2014-02-10 | 2018-02-06 | International Business Machines Corporation | Dynamic resource allocation in MapReduce |
US10007692B2 (en) * | 2014-03-27 | 2018-06-26 | Microsoft Technology Licensing, Llc | Partition filtering using smart index in memory |
US10409835B2 (en) * | 2014-11-28 | 2019-09-10 | Microsoft Technology Licensing, Llc | Efficient data manipulation support |
US9892164B2 (en) | 2015-01-30 | 2018-02-13 | International Business Machines Corporation | Reducing a large amount of data to a size available for interactive analysis |
US10073885B2 (en) | 2015-05-29 | 2018-09-11 | Oracle International Corporation | Optimizer statistics and cost model for in-memory tables |
US10067954B2 (en) | 2015-07-22 | 2018-09-04 | Oracle International Corporation | Use of dynamic dictionary encoding with an associated hash table to support many-to-many joins and aggregations |
US10666574B2 (en) | 2015-09-28 | 2020-05-26 | Amazon Technologies, Inc. | Distributed stream-based database triggers |
US10565209B2 (en) * | 2015-12-01 | 2020-02-18 | International Business Machines Corporation | Injecting outlier values |
US10061714B2 (en) | 2016-03-18 | 2018-08-28 | Oracle International Corporation | Tuple encoding aware direct memory access engine for scratchpad enabled multicore processors |
US10061832B2 (en) | 2016-11-28 | 2018-08-28 | Oracle International Corporation | Database tuple-encoding-aware data partitioning in a direct memory access engine |
US10402425B2 (en) | 2016-03-18 | 2019-09-03 | Oracle International Corporation | Tuple encoding aware direct memory access engine for scratchpad enabled multi-core processors |
US10055358B2 (en) | 2016-03-18 | 2018-08-21 | Oracle International Corporation | Run length encoding aware direct memory access filtering engine for scratchpad enabled multicore processors |
US11188542B2 (en) * | 2016-05-03 | 2021-11-30 | Salesforce.Com, Inc. | Conditional processing based on data-driven filtering of records |
US10599488B2 (en) | 2016-06-29 | 2020-03-24 | Oracle International Corporation | Multi-purpose events for notification and sequence control in multi-core processor systems |
US10380058B2 (en) | 2016-09-06 | 2019-08-13 | Oracle International Corporation | Processor core to coprocessor interface with FIFO semantics |
US10783102B2 (en) | 2016-10-11 | 2020-09-22 | Oracle International Corporation | Dynamically configurable high performance database-aware hash engine |
US10176114B2 (en) | 2016-11-28 | 2019-01-08 | Oracle International Corporation | Row identification number generation in database direct memory access engine |
US10459859B2 (en) | 2016-11-28 | 2019-10-29 | Oracle International Corporation | Multicast copy ring for database direct memory access filtering engine |
US10725947B2 (en) | 2016-11-29 | 2020-07-28 | Oracle International Corporation | Bit vector gather row count calculation and handling in direct memory access engine |
US11157690B2 (en) | 2017-02-22 | 2021-10-26 | Microsoft Technology Licensing, Llc | Techniques for asynchronous execution of computationally expensive local spreadsheet tasks |
US10725799B2 (en) * | 2017-02-22 | 2020-07-28 | Microsoft Technology Licensing, Llc | Big data pipeline management within spreadsheet applications |
CN107451204B (en) * | 2017-07-10 | 2021-01-05 | 创新先进技术有限公司 | Data query method, device and equipment |
CN108920602B (en) * | 2018-06-28 | 2021-12-14 | 北京京东尚科信息技术有限公司 | Method and apparatus for outputting information |
US10997177B1 (en) | 2018-07-27 | 2021-05-04 | Workday, Inc. | Distributed real-time partitioned MapReduce for a data fabric |
CN111163056B (en) * | 2019-12-06 | 2021-08-31 | 西安电子科技大学 | Data confidentiality method and system aiming at MapReduce calculation |
Family Cites Families (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5421008A (en) * | 1991-11-08 | 1995-05-30 | International Business Machines Corporation | System for interactive graphical construction of a data base query and storing of the query object links as an object |
US5560007A (en) * | 1993-06-30 | 1996-09-24 | Borland International, Inc. | B-tree key-range bit map index optimization of database queries |
US5584024A (en) * | 1994-03-24 | 1996-12-10 | Software Ag | Interactive database query system and method for prohibiting the selection of semantically incorrect query parameters |
US5787411A (en) * | 1996-03-20 | 1998-07-28 | Microsoft Corporation | Method and apparatus for database filter generation by display selection |
US6243703B1 (en) * | 1997-10-14 | 2001-06-05 | International Business Machines Corporation | Method of accessing and displaying subsystem parameters including graphical plan table data |
US6493700B2 (en) | 1997-10-14 | 2002-12-10 | International Business Machines Corporation | System and method for specifying custom qualifiers for explain tables |
KR20030047889A (en) * | 2000-05-26 | 2003-06-18 | 컴퓨터 어소시에이츠 싱크, 인코포레이티드 | System and method for automatically generating database queries |
US6925608B1 (en) * | 2000-07-05 | 2005-08-02 | Kendyl A. Roman | Graphical user interface for building Boolean queries and viewing search results |
US6785668B1 (en) * | 2000-11-28 | 2004-08-31 | Sas Institute Inc. | System and method for data flow analysis of complex data filters |
US6775792B2 (en) | 2001-01-29 | 2004-08-10 | Snap Appliance, Inc. | Discrete mapping of parity blocks |
US6931390B1 (en) * | 2001-02-27 | 2005-08-16 | Oracle International Corporation | Method and mechanism for database partitioning |
US6778977B1 (en) * | 2001-04-19 | 2004-08-17 | Microsoft Corporation | Method and system for creating a database table index using multiple processors |
US6789071B1 (en) * | 2001-04-20 | 2004-09-07 | Microsoft Corporation | Method for efficient query execution using dynamic queries in database environments |
US20020194166A1 (en) * | 2001-05-01 | 2002-12-19 | Fowler Abraham Michael | Mechanism to sift through search results using keywords from the results |
US7315894B2 (en) | 2001-07-17 | 2008-01-01 | Mcafee, Inc. | Network data retrieval and filter systems and methods |
AU2002322835A1 (en) | 2001-08-01 | 2003-02-17 | Kim Updike | Methods and apparatus for fairly placing players in bet positions |
US7024414B2 (en) * | 2001-08-06 | 2006-04-04 | Sensage, Inc. | Storage of row-column data |
US8868544B2 (en) * | 2002-04-26 | 2014-10-21 | Oracle International Corporation | Using relational structures to create and support a cube within a relational database system |
US7136850B2 (en) * | 2002-12-20 | 2006-11-14 | International Business Machines Corporation | Self tuning database retrieval optimization using regression functions |
US7584264B2 (en) | 2003-08-19 | 2009-09-01 | Google Inc. | Data storage and retrieval systems and related methods of storing and retrieving data |
US20070078826A1 (en) * | 2005-10-03 | 2007-04-05 | Tolga Bozkaya | Analytic enhancements to model clause in structured query language (SQL) |
US20050131893A1 (en) * | 2003-12-15 | 2005-06-16 | Sap Aktiengesellschaft | Database early parallelism method and system |
US20050192942A1 (en) * | 2004-02-27 | 2005-09-01 | Stefan Biedenstein | Accelerated query refinement by instant estimation of results |
US7299220B2 (en) * | 2004-03-31 | 2007-11-20 | Microsoft Corporation | Constructing database object workload summaries |
US7650331B1 (en) | 2004-06-18 | 2010-01-19 | Google Inc. | System and method for efficient large-scale data processing |
US7756881B2 (en) * | 2006-03-09 | 2010-07-13 | Microsoft Corporation | Partitioning of data mining training set |
JP4696011B2 (en) * | 2006-03-24 | 2011-06-08 | パナソニック株式会社 | Gravure coating equipment |
US7461050B2 (en) * | 2006-03-30 | 2008-12-02 | International Business Machines Corporation | Methods of cost estimation using partially applied predicates |
US20070250470A1 (en) * | 2006-04-24 | 2007-10-25 | Microsoft Corporation | Parallelization of language-integrated collection operations |
US8555288B2 (en) * | 2006-05-17 | 2013-10-08 | Teradata Us, Inc. | Managing database utilities to improve throughput and concurrency |
US7624118B2 (en) * | 2006-07-26 | 2009-11-24 | Microsoft Corporation | Data processing over very large databases |
US7962442B2 (en) * | 2006-08-31 | 2011-06-14 | International Business Machines Corporation | Managing execution of a query against selected data partitions of a partitioned database |
US8700579B2 (en) * | 2006-09-18 | 2014-04-15 | Infobright Inc. | Method and system for data compression in a relational database |
GB0625641D0 (en) | 2006-12-21 | 2007-01-31 | Symbian Software Ltd | Dynamic filtering for partially trusted servers |
JP2008181243A (en) * | 2007-01-23 | 2008-08-07 | Hitachi Ltd | Database management system for controlling setting of cache partition region of storage system |
US8156107B2 (en) * | 2007-02-02 | 2012-04-10 | Teradata Us, Inc. | System and method for join-partitioning for local computability of query over shared-nothing clusters |
US20080201303A1 (en) | 2007-02-20 | 2008-08-21 | International Business Machines Corporation | Method and system for a wizard based complex filter with realtime feedback |
US8108399B2 (en) | 2007-05-18 | 2012-01-31 | Microsoft Corporation | Filtering of multi attribute data via on-demand indexing |
US20090006347A1 (en) * | 2007-06-29 | 2009-01-01 | Lucent Technologies Inc. | Method and apparatus for conditional search operators |
US7895151B2 (en) * | 2008-06-23 | 2011-02-22 | Teradata Us, Inc. | Fast bulk loading and incremental loading of data into a database |
US8250088B2 (en) * | 2007-10-05 | 2012-08-21 | Imation Corp. | Methods for controlling remote archiving systems |
US7856434B2 (en) | 2007-11-12 | 2010-12-21 | Endeca Technologies, Inc. | System and method for filtering rules for manipulating search results in a hierarchical search and navigation system |
US8386508B2 (en) * | 2008-04-28 | 2013-02-26 | Infosys Technologies Limited | System and method for parallel query evaluation |
JP4207096B2 (en) | 2008-06-12 | 2009-01-14 | 株式会社日立製作所 | Database management method |
US8364751B2 (en) * | 2008-06-25 | 2013-01-29 | Microsoft Corporation | Automated client/server operation partitioning |
US8041714B2 (en) | 2008-09-15 | 2011-10-18 | Palantir Technologies, Inc. | Filter chains with associated views for exploring large data sets |
US8145806B2 (en) * | 2008-09-19 | 2012-03-27 | Oracle International Corporation | Storage-side storage request management |
EP2169563A1 (en) * | 2008-09-26 | 2010-03-31 | Siemens Aktiengesellschaft | Method for performing a database query in a relational database |
US8478775B2 (en) | 2008-10-05 | 2013-07-02 | Microsoft Corporation | Efficient large-scale filtering and/or sorting for querying of column based data encoded structures |
US7917463B2 (en) * | 2008-10-10 | 2011-03-29 | Business.Com, Inc. | System and method for data warehousing and analytics on a distributed file system |
US20100121869A1 (en) * | 2008-11-07 | 2010-05-13 | Yann Le Biannic | Normalizing a filter condition of a database query |
US10831724B2 (en) * | 2008-12-19 | 2020-11-10 | Bmc Software, Inc. | Method of reconciling resources in the metadata hierarchy |
US8533181B2 (en) * | 2009-04-29 | 2013-09-10 | Oracle International Corporation | Partition pruning via query rewrite |
US8103638B2 (en) * | 2009-05-07 | 2012-01-24 | Microsoft Corporation | Partitioning of contended synchronization objects |
CN101963965B (en) * | 2009-07-23 | 2013-03-20 | 阿里巴巴集团控股有限公司 | Document indexing method, data query method and server based on search engine |
US9135299B2 (en) * | 2009-09-01 | 2015-09-15 | Teradata Us, Inc. | System, method, and computer-readable medium for automatic index creation to improve the performance of frequently executed queries in a database system |
US8874600B2 (en) * | 2010-01-30 | 2014-10-28 | International Business Machines Corporation | System and method for building a cloud aware massive data analytics solution background |
JP5593792B2 (en) * | 2010-03-31 | 2014-09-24 | 富士通株式会社 | RAID device, storage control method, and storage control program |
US8606756B2 (en) * | 2010-04-09 | 2013-12-10 | Ca, Inc. | Distributed system having a shared central database |
US8386471B2 (en) * | 2010-05-27 | 2013-02-26 | Salesforce.Com, Inc. | Optimizing queries in a multi-tenant database system environment |
US20120011144A1 (en) * | 2010-07-12 | 2012-01-12 | Frederik Transier | Aggregation in parallel computation environments with shared memory |
US8392399B2 (en) * | 2010-09-16 | 2013-03-05 | Microsoft Corporation | Query processing algorithm for vertically partitioned federated database systems |
US8413145B2 (en) * | 2010-09-30 | 2013-04-02 | Avaya Inc. | Method and apparatus for efficient memory replication for high availability (HA) protection of a virtual machine (VM) |
US8818989B2 (en) * | 2010-11-30 | 2014-08-26 | International Business Machines Corporation | Memory usage query governor |
- 2010
  - 2010-12-28 US US12/979,467 patent/US10311105B2/en active Active
- 2011
  - 2011-12-24 WO PCT/US2011/067307 patent/WO2012092224A2/en active Application Filing
  - 2011-12-24 EP EP11854242.2A patent/EP2659403A4/en not_active Ceased
  - 2011-12-24 CA CA2822900A patent/CA2822900C/en active Active
  - 2011-12-24 JP JP2013547600A patent/JP5990192B2/en active Active
  - 2011-12-27 CN CN201110446021.2A patent/CN102682052B/en active Active
- 2013
  - 2013-01-24 HK HK13101082.2A patent/HK1174111A1/en unknown
- 2019
  - 2019-05-07 US US16/405,321 patent/US20190266195A1/en not_active Abandoned
Non-Patent Citations (2)
Title |
---|
ANURAG ACHARYA ET AL.: "Active disks", Operating Systems Review, vol. 32, ACM, 1 October 1998, pages 81-91 |
See also references of EP2659403A4 |
Also Published As
Publication number | Publication date |
---|---|
JP5990192B2 (en) | 2016-09-07 |
CN102682052A (en) | 2012-09-19 |
US20190266195A1 (en) | 2019-08-29 |
CN102682052B (en) | 2015-08-19 |
JP2014502762A (en) | 2014-02-03 |
EP2659403A2 (en) | 2013-11-06 |
CA2822900C (en) | 2020-01-07 |
HK1174111A1 (en) | 2013-05-31 |
EP2659403A4 (en) | 2017-06-07 |
CA2822900A1 (en) | 2012-07-05 |
US10311105B2 (en) | 2019-06-04 |
US20120166447A1 (en) | 2012-06-28 |
WO2012092224A3 (en) | 2012-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20190266195A1 (en) | Filtering queried data on data stores | |
JP7273045B2 (en) | Dimensional Context Propagation Techniques for Optimizing SQL Query Plans | |
US11487771B2 (en) | Per-node custom code engine for distributed query processing | |
CN110795257A (en) | Method, device and equipment for processing multi-cluster operation records and storage medium | |
CA2953969C (en) | Interactive interfaces for machine learning model evaluations | |
US9672474B2 (en) | Concurrent binning of machine learning data | |
US10318882B2 (en) | Optimized training of linear machine learning models | |
US10963810B2 (en) | Efficient duplicate detection for machine learning data sets | |
US10169715B2 (en) | Feature processing tradeoff management | |
US9736270B2 (en) | Automated client/server operation partitioning | |
US8447901B2 (en) | Managing buffer conditions through sorting | |
US20070250517A1 (en) | Method and Apparatus for Autonomically Maintaining Latent Auxiliary Database Structures for Use in Executing Database Queries | |
WO2017112861A1 (en) | System and method for adaptive filtering of data requests | |
Hu et al. | Towards big linked data: a large-scale, distributed semantic data storage | |
Mondal et al. | Casqd: continuous detection of activity-based subgraph pattern queries on dynamic graphs | |
KR20160050930A (en) | Apparatus for Processing Transaction with Modification of Data in Large-Scale Distributed File System and Computer-Readable Recording Medium with Program | |
CN114003203B (en) | Maintenance method, device and equipment for activity counting variable and readable medium | |
Zhao et al. | A multidimensional OLAP engine implementation in key-value database systems | |
US12061621B2 (en) | Bulk data extract hybrid job processing | |
US11663216B2 (en) | Delta database data provisioning | |
US20240143566A1 (en) | Data processing method and apparatus, and computing system | |
Khalifa | Achieving consumable big data analytics by distributing data mining algorithms | |
Sinha | ADOPTING ANALYTICS WITH BIG DATA. | |
CN115857938A (en) | Method and device for resource audit of big data submission operation | |
Lin et al. | MapReduce Algorithm Design |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11854242; Country of ref document: EP; Kind code of ref document: A2 |
| ENP | Entry into the national phase | Ref document number: 2822900; Country of ref document: CA |
| WWE | Wipo information: entry into national phase | Ref document number: 2011854242; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2013547600; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |