US20190050725A1 - System and method for approximating query results using local and remote neural networks - Google Patents


Info

Publication number
US20190050725A1
Authority
US
United States
Prior art keywords
neural network
query
local
primary
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US15/858,943
Inventor
Guy LEVY YURISTA
Adi AZARIA
Amir Orad
Nir REGEV
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sisense Inc
Original Assignee
Sisense Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sisense Inc
Priority to US15/858,943
Assigned to SISENSE LTD.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LEVY YURISTA, GUY; ORAD, AMIR; AZARIA, ADI; REGEV, NIR
Publication of US20190050725A1
Assigned to SILICON VALLEY BANK: INTELLECTUAL PROPERTY SECURITY AGREEMENT. Assignor: SISENSE LTD
Assigned to SILICON VALLEY BANK, AS AGENT: INTELLECTUAL PROPERTY SECURITY AGREEMENT. Assignor: SISENSE LTD
Assigned to SISENSE LTD: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignor: SILICON VALLEY BANK, AS AGENT
Assigned to SISENSE LTD: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignor: SILICON VALLEY BANK
Assigned to COMERICA BANK: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignor: SISENSE LTD.
Assigned to SISENSE LTD.: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignor: COMERICA BANK
Assigned to HERCULES CAPITAL, INC.: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SISENSE LTD; SISENSE SF INC.
Assigned to SISENSE LTD. and SISENSE SF, INC.: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignor: TRIPLEPOINT VENTURE GROWTH BDC CORP

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2453 Query optimisation
    • G06F 16/24534 Query rewriting; Transformation
    • G06F 16/24549 Run-time optimisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2462 Approximate or statistical queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/248 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/903 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/903 Querying
    • G06F 16/90335 Query processing
    • G06F 17/30979
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Definitions

  • the present disclosure relates generally to generating query results from databases, and particularly to generating query results based on a neural network.
  • a typical approach in attempting to gain insight from data includes querying a database storing the data to get a specific result. For example, a user may generate a query (e.g., an SQL query) and the query is sent to a database management system (DBMS) that executes the query on one or more tables stored on the database.
  • one solution includes indexing data stored in databases.
  • Another solution includes caching results of frequent queries.
  • Yet another solution includes selectively retrieving results from the database, so that the query would be served immediately.
  • Certain embodiments disclosed herein include a method for providing local approximations of query results.
  • the method includes querying a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set; receiving from the primary neural network a predicted test result in response to the at least one test query; sending, based on the predicted test result, a model of the primary neural network to a local machine; and storing the model as a local neural network of the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
  • Certain embodiments disclosed herein also include a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process.
  • the process includes querying a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set; receiving from the primary neural network a predicted test result in response to the at least one test query; sending, based on the predicted test result, a model of the primary neural network to a local machine; and storing the model as a local neural network of the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
  • Certain embodiments disclosed herein also include a system for providing local approximations of query results.
  • the system includes a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: query a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set; receive from the primary neural network a predicted test result in response to the at least one test query; send, based on the predicted test result, a model of the primary neural network to a local machine; and store the model as a local neural network of the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
  • FIG. 1A shows a network diagram of a system for approximating query results according to an embodiment.
  • FIG. 1B shows a network diagram of a system for approximating query results according to another embodiment.
  • FIG. 2 is a schematic diagram of a neural network for generating approximate results for a database query, according to an embodiment.
  • FIG. 3 is a flowchart illustrating a method for training a neural network to approximate results for queries related to a data set according to an embodiment.
  • FIG. 4 is an example graph showing real query results plotted against predictions of a neural network according to an embodiment.
  • FIG. 5 is a flowchart illustrating a method for generating a query training set for a neural network according to an embodiment.
  • FIG. 6 is a flowchart illustrating a method for generating approximate query results for a neural network according to an embodiment.
  • FIG. 1A shows a network diagram 100 of a system for approximating query results according to an embodiment.
  • a network 110 is provided, which may include the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), a metro area network (MAN), a mobile network, and other networks capable of enabling communication between elements of a system 100 .
  • the network 110 provides connectivity to one or more databases 120 - 1 to 120 -N, where N is an integer greater than or equal to 1, a training set generator 130 , one or more user nodes 140 - 1 to 140 -M, where M is an integer greater than or equal to 1, an approximation server 150 , and neural network 200 .
  • the one or more databases 120 - 1 through 120 -N may store one or more structured data sets.
  • a database 120 may be implemented as any one of: a distributed database, data warehouse, federated database, graph database, columnar database, and the like.
  • a database 120 may include a database management system (DBMS), not shown, which manages access to the database.
  • a database 120 may include one or more tables of data.
  • the network 110 is connected to the neural network (NN) 200 and, in some embodiments, the training set generator 130 .
  • the NN 200 may be implemented as a recurrent NN (RNN).
  • a plurality of NNs may be implemented.
  • a second NN may have more layers than a first NN, as described herein below.
  • the second NN may generate predictions with a higher degree of certainty (i.e., have a higher confidence level) than the first NN, while requiring more memory to store its NN model than the first NN.
  • the network 110 is further connected to a plurality of user nodes 140 - 1 through 140 -M (hereinafter referred to as user node 140 or user nodes 140 , merely for simplicity).
  • a user node 140 may be a mobile device, a smartphone, a desktop computer, a laptop computer, a tablet computer, a wearable device, an Internet of Things (IoT) device, and the like.
  • the user node 140 is configured to send a query to be executed on one or more of the databases 120 .
  • a user node 140 may send the query directly to a database 120, to be handled, for example, by the DBMS.
  • the query is sent to an approximation server 150 .
  • the training set generator 130 is configured to receive, for example from a DBMS of a database 120 , a plurality of training queries, from which to generate a training set for the neural network 200 .
  • An embodiment of the training set generator 130 is discussed in more detail with respect to FIG. 5 .
  • the approximation server 150 is configured to receive queries from the user nodes 140 , and send the received queries to be executed on the appropriate databases 120 .
  • the approximation server 150 may also be configured to provide a user node 140 with an approximate result, generated by the NN 200 . This is discussed in more detail below with respect to FIG. 2 .
  • the approximation server 150 may include a processing circuitry (not shown) that may be realized as one or more hardware logic components and circuits.
  • illustrative types of hardware logic components include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information.
  • the processing circuitry of the approximation server 150 is configured to include the training set generator and the neural network.
  • the approximation server 150 may further include memory (not shown) configured to store software.
  • Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing circuitry to perform the various processes described herein.
  • FIG. 1B shows a network diagram 105 of a system for approximating query results according to another embodiment.
  • multiple networks are employed for approximating the query results.
  • a first network 110 - 1 is connected to databases 120 and a second network 110 - 2 is connected to one or more user nodes (UN) 140 - 1 through 140 -M and a neural network machine 200 - 1 .
  • the first network 110 - 1 and the second network 110 - 2 are each further connected to a central link 160 .
  • the central link 160 includes an approximation server 150 , a training set generator (TSG) 130 , and a neural network 200 .
  • the approximation server 150 includes the training set generator 130 and the neural network 200 .
  • additional networks 110 - i are further connected to the central link 160 .
  • each additional network 110 - i is connected to one or more user nodes 140 -L through 140 -J and a local neural network machine 200 -K.
  • ‘M’, ‘N’, ‘i’, ‘J’, ‘K’ and ‘L’ are integers greater than or equal to 1.
  • the second network 110-2 and each additional network 110-i may include local networks, such as, but not limited to, virtual private networks (VPNs), local area networks (LANs), and the like.
  • Each local network includes a local NN machine 200 - 1 through 200 -K for storing a NN model, which is generated by the approximation server 150 .
  • a NN model may be stored on one or more of the user nodes 140-1 through 140-M, which are communicatively connected to the local network 110-2.
  • the user node 140 - 1 is configured to send a query to be executed on one or more of the databases 120 either directly (not shown) or via the approximation server 150 of the central link 160 .
  • the approximation server 150 may be configured to provide the user node 140 - 1 with an approximate result generated by the NN 200 .
  • a first NN and second NN are trained on a data set of one or more databases 120 .
  • the first NN may include fewer layers and neurons than the second NN.
  • the first NN may be stored in one or more local NN machines 200 - 1 through 200 -K, such as local NN machine 200 - 1
  • the second NN may be stored on the approximation server 150 , e.g. the neural network 200 .
  • the approximation server 150 will then provide a second predicted result having a greater accuracy than the initial predicted result.
  • the approximation server 150 may send the query for execution on the data set from the database 120 , and provide the real result to the user node 140 - 1 .
  • the first NN is executed on a user node only if the user node has the computational resources, e.g., sufficient processing power and memory, to efficiently execute the query on the first neural network. If not, then the user node may be configured to either access a local machine (e.g., a dedicated machine, or another user node on the local network) to generate predictions from a local neural network, or be directed to the approximation server 150 of the central link 160.
  • the arrangement discussed with reference to FIG. 1B allows for a more efficient response to a query than sending a query directly to a database 120 for a real result, as the time required to receive a result is reduced by providing an initial approximation, a more accurate second approximation, and finally a real result.
  • using a plurality of neural networks allows for increasingly accurate results to be obtained.
  • implementing the neural network on a local machine or even a user device allows for a significant reduction of the storage required to maintain the data, as the NN model stored on the local machine typically includes fewer layers (e.g., in comparison to a NN implemented on a server).
  • the neural network on a local machine requires less memory to store its NN model than the second NN.
  • a typical size of a neural network can be in the tens of megabytes, compared with a database (maintaining substantially the same data) having a size of hundreds of gigabytes.
  • FIG. 2 is a schematic diagram of a neural network (NN) 200 for generating approximate results for a database query, according to an embodiment.
  • a neural network 200 includes an input numerical translator matrix 205 .
  • the input numerical translator matrix 205 is configured to receive a query and translate the query to a numerical representation which can be fed into an input neuron 215 of a neural network 200 .
  • the input numerical translator matrix 205 is configured to determine what elements, such as predicates and expressions, are present in the received query. In an embodiment, each element is mapped by an injective function to a unique numerical representation. For example, the input numerical translator matrix 205 may receive a query and generate, for each unique query, a unique vector. The unique vectors may be fed as input to one or more of input neurons 215 , which together form an input layer 210 of the neural network 200 .
  • Each neuron (also referred to as a node) of the neural network 200 is configured to apply a function to its input, sending the output of the function forward (e.g., to another neuron), and may include a weight function.
  • a weight function of a neuron determines the amount of contribution a single neuron has on the eventual output of the neural network. The higher a weight value is, the more effect the neuron's computation carries on the output of the neural network.
  • the neural network 200 further includes a plurality of hidden neurons 225 in a hidden layer 220 .
  • a single hidden layer 220 is shown, however a plurality of hidden layers may be implemented, without departing from the scope of the disclosed embodiments.
  • the neural network 200 is configured such that each output of an input neuron 215 of the input layer 210 is used as an input of one or more hidden neurons 225 in the hidden layer 220 . Typically, all outputs of the input neurons 215 are used as inputs to all the hidden neurons 225 of the hidden layer 220 . In embodiments where a plurality of hidden layers is implemented, the output of the input layer 210 is used as the input for the hidden neurons of a first hidden layer.
  • the neural network 200 further includes an output layer 230, which includes one or more output neurons 235.
  • the output of the hidden layer 220 is the input of the output layer 230 .
  • the output of the final hidden layer is the input of the output layer's 230 output neurons 235 .
  • the output neurons 235 of the output layer 230 may provide a result to an output numerical translator matrix 206 , which is configured to translate the output of the output layer 230 from a numerical representation to a query result. The result may then be sent to a user node which has sent the query.
  • the neural network 200 may be stored on one or more user nodes (e.g., the user nodes 140 of FIGS. 1A and 1B). This could allow, for example, a user of the user node to receive a response to a query faster than if the query was sent to be executed on a remote database. Additionally, the neural network 200 can be stored within a central link (e.g., the central link 160 of FIG. 1), where multiple local networks can access the neural network 200. While the result may not always be as accurate as querying the data directly for real results, using the neural network 200 allows the user of a user node to receive a faster response with an approximated result above a predetermined level of accuracy.
  • a neural network 200 may be trained by executing a number of training queries and comparing the results from the neural network 200 to real results determined from querying a database directly. The training of a neural network 200 is discussed in further detail below regarding FIG. 5 .
  • a neural network 200 may further include a version identifier, indicating the amount of training data the neural network 200 has received. For example, a higher version number may indicate that a first neural network contains a more up-to-date training than a second neural network with a lower version number.
  • the up-to-date version of a neural network may be stored on an approximation server, e.g., the approximation server 150 of FIGS. 1A and 1B .
  • a user node may periodically poll the approximation server to check if there is an updated version of the neural network.
  • the approximation server may push a notification to one or more user nodes to indicate that a new version of the neural network is available and downloadable over a network connection.
  • the approximation server may have stored therein a plurality of trained neural networks, wherein each neural network is trained on a different data set. While a plurality of neural networks may be trained on different data sets, it is understood that some overlap may occur between data sets.
  • the neural network discussed with reference to FIG. 1 can be executed on, or realized by, general-purpose or dedicated hardware.
  • Examples of such hardware include analog or digital neuro-computers.
  • Such computers may be realized using any one of, or a combination of, electronic components, optical components, a von Neumann multiprocessor, a graphics processing unit (GPU), a vector processor, an array processor, a tensor processing unit, and the like.
  • FIG. 3 is a flowchart illustrating a method 300 for training a neural network to approximate results for queries related to a data set according to an embodiment.
  • a batch of training query pairs is received, e.g., by the approximation server 150 of FIGS. 1A and 1B.
  • the batch of training queries is generated by a training set generator and includes a plurality of queries together with their matching real results.
  • the real result of the query may be previously generated by executing the query directly on a data set.
  • the query includes query elements, which include, for example, predicates and expressions.
  • a real result of the query may be, for example, an alphanumerical string or a calculated number value.
  • a query may be related to all, or part, of a data set.
  • a query directed to a columnar database may be executed based on a subset of columns of a table which does not include all columns of the table.
  • the query pairs are vectorized to a format which the neural network is able to process, for example, by an input numerical translator matrix (e.g., as shown in FIG. 2 ).
  • the batch of training queries is fed to a neural network to generate a predicted result for each query.
  • the neural network is configured to receive one batch of training queries at a time, where a full pass over the plurality of batches is called an epoch.
  • the queries may be fed through one or more layers within the neural network. For example, a query may be fed through an input layer, a hidden layer, and an output layer.
  • each query is first fed to an input numerical translator matrix to determine elements present within the query.
  • Each element is mapped, e.g., by an injective function, to a numerical representation, such as a vector.
  • the vectors may be fed to one or more neurons, where each neuron is configured to apply a function to the vector, the function including at least a weight function.
  • the weight function determines the contribution of each neuron function toward a final query predicted result.
  • a comparison is made between the predicted result of a query and the real result of that query.
  • the comparison includes determining the differences between the predicted result and the real result. For example, if the real result is a number value, the comparison includes calculating the difference between a number output value from the predicted result and the number value of the real result.
  • the weight of a neuron is adjusted via a weight function.
  • the weight of a neuron determines the amount of contribution a single neuron has on the eventual output of the neural network. The higher a weight value is, the more effect the neuron's computation carries on the output of the neural network. Adjusting weights may be performed, for example, by methods of back propagation.
  • One example of such a method is "backward propagation of errors," which is an algorithm for supervised learning of neural networks using gradient descent. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights.
  • training ends once an epoch has been fully processed, i.e., once the entire plurality of batches has been processed by the neural network. If the epoch has not ended, execution continues at S310, where a new batch is fed to the neural network; otherwise, execution terminates. In some embodiments, a check is performed to determine the number of epochs the system has processed. The system may generate a target number of epochs to train the neural network with, based on the amount of training queries generated, the variance of the data set, the size of the data set, and the like. A sketch of such a training loop is given below.
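  • The following is a minimal numpy sketch of such a batch-and-epoch training loop; the layer sizes, learning rate, and synthetic (query vector, real result) pairs are assumptions for illustration only, not the disclosed implementation.
```python
# Minimal sketch of the FIG. 3 training loop: feed batches of (query vector, real
# result) pairs, compare predicted and real results, adjust weights by backpropagation.
import numpy as np

rng = np.random.default_rng(42)
d, k = 9, 16                                    # assumed input and hidden layer sizes
W1, b1 = rng.normal(0, 0.1, (d, k)), np.zeros(k)
W2, b2 = rng.normal(0, 0.1, (k, 1)), np.zeros(1)
lr = 0.05

# Synthetic training set standing in for vectorized queries and their real results.
X_all = rng.random((256, d))
T_all = X_all.sum(axis=1, keepdims=True)

for epoch in range(20):                         # an epoch = one pass over all batches
    for start in range(0, len(X_all), 32):      # batches of 32 query pairs
        X, T = X_all[start:start + 32], T_all[start:start + 32]
        H = np.tanh(X @ W1 + b1)                # hidden layer
        Y = H @ W2 + b2                         # predicted results
        E = Y - T                               # difference between predicted and real results
        # Backward propagation of errors (gradient descent on the squared error).
        dY = E / len(X)
        dW2, db2 = H.T @ dY, dY.sum(axis=0)
        dH = dY @ W2.T
        dZ1 = dH * (1 - H ** 2)
        dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)
        W1, b1 = W1 - lr * dW1, b1 - lr * db1
        W2, b2 = W2 - lr * dW2, b2 - lr * db2
    print(epoch, float(np.mean(E ** 2)))        # error on the last batch of the epoch
```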
  • FIG. 4 is an example graph 400 showing real query results 420 plotted against predictions 410 of a neural network according to an embodiment.
  • An ideal version of the graph would be linear, with the neural network exactly predicting each real result.
  • As training progresses, the graph should converge to a linear function.
  • the determination to continue the training process may include plotting the predicted results against the real results, fitting a function to the points, and performing a regression on that function to determine whether it is sufficiently linear.
  • Sufficiently linear may mean, for example, that the R² value of the regression is above a predetermined threshold, as in the sketch below.
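  • A small sketch of this convergence check follows; the 0.95 threshold and the toy result values are arbitrary illustrative assumptions rather than values taken from the disclosure.
```python
# Regress predicted results against real results and keep training while the fit
# is not yet "sufficiently linear" (R^2 below a predetermined threshold).
import numpy as np

def regression_r_squared(real: np.ndarray, predicted: np.ndarray) -> float:
    # R^2 of the least-squares line through (real, predicted) points,
    # i.e. the squared Pearson correlation coefficient.
    r = np.corrcoef(real, predicted)[0, 1]
    return r ** 2

real = np.array([10.0, 20.0, 30.0, 40.0])        # real query results
predicted = np.array([11.0, 19.5, 28.0, 41.0])   # neural network predictions
keep_training = regression_r_squared(real, predicted) < 0.95  # hypothetical threshold
print(regression_r_squared(real, predicted), keep_training)
```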
  • FIG. 5 is a flowchart illustrating a method 500 for generating a query training set for a neural network according to an embodiment.
  • the training of neural networks is required in order to provide sufficiently accurate results, where sufficiently accurate results means that the percentage of predicted results matching the real results is above a predetermined threshold.
  • Training a neural network involves exposing the neural network to a plurality of training conditions and their previously calculated real results. This allows the neural network to adjust weight functions of the neurons within the neural network.
  • training sets having both a sufficient depth of data (e.g., queries which require different areas of data for their results, take variance into account, and the like) and a sufficiently large quantity of query examples are not always available. Therefore, it may be advantageous to generate a qualified training set.
  • An exemplary method is discussed herein.
  • a first set of queries is received, e.g., by a training set generator.
  • the first set of queries may be queries that have been generated by one or more users, for example through user nodes.
  • this first set of queries does not include enough queries to train a neural network to a point where the predictions are sufficiently accurate.
  • a variable element of a first query of the first set of queries is determined.
  • For example, a received query may request the sum of income values between 18 and 79.
  • a variance of the data set is determined with respect to the determined variable element, i.e., how the range of values covered by the query relates to the range of values present in the full data set.
  • For example, the full data set may have values ranging between 0 and 1,000.
  • In that case, querying for the sum of income values between 18 and 79 may not be representative of the sum of income for the entire data set, which would bias the NN model.
  • the variance of the training queries is determined to take this potential bias into account.
  • a training query is generated based on the determined variable and the variance thereof.
  • For example, a query covering a different, under-represented portion of the value range may be generated (a hypothetical sketch of this step is given at the end of this discussion).
  • It is then determined whether additional training queries should be generated. The determination may be based on, for example, whether a total number of queries (real and generated) has exceeded a predetermined threshold, whether the total number of generated queries is above a threshold, and the like. For example, it may be determined whether the training queries amount to a representative sample of the data set (i.e., queries that are directed to all portions of the data, or to a number of portions of the data above a predetermined threshold). In another example, it may be determined whether additional variance is required for certain predicates.
  • the training queries are provided to the input layer of the neural network for training.
  • the training queries are executed on the data set to generate query pairs, each including a query and its real result.
  • the training queries and real results are then vectorized to a matrix representation that is fed to the neural network (as described in more detail with respect to FIG. 3).
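  • A hypothetical sketch of this training-set generation step follows; the SQLite table, column names, query template, and range step are illustrative assumptions and are not part of the disclosure.
```python
# Given user queries that only cover part of a column's value range, generate
# additional training queries over other sub-ranges so the training set reflects
# the variance of the full data set, and pair each query with its real result.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (age INTEGER, income REAL)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [(a, 10.0 * a) for a in range(0, 100, 5)])

def generate_training_queries(column: str, lo: int, hi: int, step: int):
    # Sweep sub-ranges of the variable element so the training queries are not
    # biased toward the single range seen in the original user query.
    template = "SELECT SUM(income) FROM people WHERE {col} BETWEEN {a} AND {b}"
    return [template.format(col=column, a=a, b=min(a + step, hi))
            for a in range(lo, hi, step)]

training_pairs = []
for q in generate_training_queries("age", 0, 100, 25):
    real_result = conn.execute(q).fetchone()[0]   # real result completing the query pair
    training_pairs.append((q, real_result))
print(training_pairs)
```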
  • FIG. 6 is a flowchart illustrating a method 600 for generating approximate query results for a neural network according to an embodiment.
  • a query is received for execution on a data set.
  • the query may be received by an approximation server from a user node, e.g., the approximation server 150 and user node 140 of FIG. 1 .
  • the query is sent to a trained neural network.
  • it is determined if the neural network is trained to provide a sufficiently accurate response to the received query. This may be based on, for example, a version number of the neural network indicating the training level thereof.
  • a determination is performed to ascertain if the query should be executed on the data set. In some embodiments, it may be advantageous to first supply an approximate answer immediately as the query is received, while additionally computing the real result of the query on the data set. This determination may be based on, for example, the version number of the neural network, the resources available to run the query through the neural network, the time required to execute the query, and so on. If a real result is determined to be provided, execution continues at S640; otherwise execution continues at S635.
  • a first result, i.e., a predicted result, is provided, e.g., sent to the user node from which the query was received.
  • the predicted result is provided, e.g., to the user node from which the query was received, while the query is executed on one or more relevant data sets to determine the real result thereof.
  • Execution may include sending all or part of the query to a DBMS of a database for execution thereon.
  • a second result, i.e., an updated result, is provided, e.g., to the user node, where the update is based on the calculated real result.
  • a notification may be provided to indicate that the result has been updated from an approximate, predicted result to a real result.
  • the notification may be a textual notification, a visual notification (such as the text or background of the notification changing colors), and the like.
  • the query and real result are sent to the neural network as an input to the input layer of the neural network.
  • the neural network may be trained based on its latest state, i.e., its version number.
  • the version number may be updated every time the neural network is trained based on the real result and the predicted result.
  • an approximation server of the neural network receives a plurality of queries and their real results, e.g., from S640, and stores them for periodically training the neural network.
  • the query and result may be used by a training set generator to generate another set of training queries.
  • the version number may be updated each time the neural network is retrained. A copy with a version number of the neural network may be stored on any of the devices discussed with respect to FIGS. 1A and 1B above.
  • the received query may be provided to a plurality of neural networks to be executed on each of their models, e.g., at S620, where at least two NNs of the plurality of NNs differ from each other in the number of layers and/or neurons.
  • a first neural network will receive the query and generate a first predicted result.
  • the first predicted result may be sent to a user node, sent to a dashboard, report, and the like.
  • the query is sent to a second neural network that has more layers, neurons, or both, than the first neural network.
  • the result available to the user node may be updated, e.g., at S650.
  • a loss function may be determined and a result thereof generated, for example by the approximation server.
  • a loss function may be, for example, a root mean squared error.
  • the loss function may be used to determine a confidence level of the prediction of a neural network.
  • it may be desirable to provide the query to the "leanest" neural network (i.e., the NN with the fewest layers, neurons, or both), which would require the least computational resources.
  • a confidence level may be determined for the prediction, and if it falls below a threshold (i.e., the confidence level is too low), the query may be provided to the next neural network, which would require more computational resources than the first NN, but may require fewer computational resources than a third NN or than executing the query on the data set itself to generate a real result. A sketch of this cascading strategy is given below.
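  • The following is a sketch of that cascading strategy; the model objects, the per-query confidence callables (standing in for loss-function-derived confidence levels), and the 0.9 threshold are assumptions for illustration only.
```python
# Try the "leanest" network first, escalate to a larger network when the
# confidence level is too low, and fall back to executing the query on the
# data set itself when no network is confident enough.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class RankedModel:
    name: str
    predict: Callable[[str], float]           # returns a predicted query result
    confidence: Callable[[str], float]        # e.g., derived from a validation RMSE loss

def answer_query(query: str, models: List[RankedModel],
                 execute_real: Callable[[str], float],
                 threshold: float = 0.9) -> Tuple[str, float]:
    # Models are ordered from fewest to most layers/neurons (cheapest first).
    for model in models:
        if model.confidence(query) >= threshold:
            return model.name, model.predict(query)
    # No network is confident enough: execute on the data set for a real result.
    return "database", execute_real(query)

# Toy usage: a lean local model with low confidence defers to the larger remote one.
models = [
    RankedModel("local-nn", lambda q: 41.0, lambda q: 0.6),
    RankedModel("server-nn", lambda q: 42.3, lambda q: 0.95),
]
print(answer_query("SELECT SUM(income) FROM people", models, execute_real=lambda q: 42.5))
```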
  • the various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof.
  • the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces.
  • the computer platform may also include an operating system and microinstruction code.
  • a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
  • the phrase “at least one of” followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including “at least one of A, B, and C,” the system can include A alone; B alone; C alone; A and B in combination; B and C in combination; A and C in combination; or A, B, and C in combination.

Abstract

A system and method for providing local approximations of query results are provided. The method includes querying a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set; receiving from the primary neural network a predicted test result in response to the at least one test query; sending, based on the predicted test result, a model of the primary neural network to a local machine; and storing the model as a local neural network of the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of the following applications:
      • U.S. Provisional Application No. 62/545,046 filed on Aug. 14, 2017;
      • U.S. Provisional Application No. 62/545,050 filed on Aug. 14, 2017;
      • U.S. Provisional Application No. 62/545,053 filed on Aug. 14, 2017; and
      • U.S. Provisional Application No. 62/545,058 filed on Aug. 14, 2017.
        All of the applications referenced above are herein incorporated by reference.
    TECHNICAL FIELD
  • The present disclosure relates generally to generating query results from databases, and particularly to generating query results based on a neural network.
  • BACKGROUND
  • It is becoming increasingly resource intensive to produce useful results from the growing amount of data generated by individuals and organizations. Business organizations in particular can generate petabytes of data and could benefit greatly from mining such data, which is automatically gathered and stored in the course of usual business operations, to extract useful insights.
  • A typical approach in attempting to gain insight from data includes querying a database storing the data to get a specific result. For example, a user may generate a query (e.g., an SQL query) and the query is sent to a database management system (DBMS) that executes the query on one or more tables stored on the database. This is a relatively simple case; however, with organizations relying on a multitude of vendors for managing their data, each with their own technology for storing data, retrieving useful insights from data is becoming increasingly complex. It is also not uncommon for queries to take several minutes, or even hours, to complete when applied to vast amounts of stored data.
  • The advantages to speeding up the process are clear, and some solutions attempt to accelerate access to the databases. For example, one solution includes indexing data stored in databases. Another solution includes caching results of frequent queries. Yet another solution includes selectively retrieving results from the database, so that the query would be served immediately.
  • However, while these database optimization and acceleration solutions are useful in analyzing databases of a certain size or known data sets, they can fall short of providing useful information when applied to large and unknown data sets, which may include data that an indexing or caching algorithm has not been programmed to process.
  • It would therefore be advantageous to provide a solution that would overcome the challenges noted above.
  • SUMMARY
  • A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term “some embodiments” may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.
  • Certain embodiments disclosed herein include a method for providing local approximations of query results. The method includes querying a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set; receiving from the primary neural network a predicted test result in response to the at least one test query; sending, based on the predicted test result, a model of the primary neural network to a local machine; and storing the model as a local neural network of the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
  • Certain embodiments disclosed herein also include a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process. The process includes querying a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set; receiving from the primary neural network a predicted test result in response to the at least one test query; sending, based on the predicted test result, a model of the primary neural network to a local machine; and storing the model as a local neural network of the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
  • Certain embodiments disclosed herein also include a system for providing local approximations of query results. The system includes a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: query a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set; receive from the primary neural network a predicted test result in response to the at least one test query; send, based on the predicted test result, a model of the primary neural network to a local machine; and store the model as a local neural network of the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
  • FIG. 1A shows a network diagram of a system for approximating query results according to an embodiment.
  • FIG. 1B shows a network diagram of a system for approximating query results according to another embodiment.
  • FIG. 2 is a schematic diagram of a neural network for generating approximate results for a database query, according to an embodiment.
  • FIG. 3 is a flowchart illustrating a method for training a neural network to approximate results for queries related to a data set according to an embodiment.
  • FIG. 4 is an example graph showing real query results plotted against predictions of a neural network according to an embodiment.
  • FIG. 5 is a flowchart illustrating a method for generating a query training set for a neural network according to an embodiment.
  • FIG. 6 is a flowchart illustrating a method for generating approximate query results for a neural network according to an embodiment.
  • DETAILED DESCRIPTION
  • It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts through several views.
  • FIG. 1A shows a network diagram 100 of a system for approximating query results according to an embodiment. A network 110 is provided, which may include the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), a metro area network (MAN), a mobile network, and other networks capable of enabling communication between elements of a system 100. The network 110 provides connectivity to one or more databases 120-1 to 120-N, where N is an integer greater than or equal to 1, a training set generator 130, one or more user nodes 140-1 to 140-M, where M is an integer greater than or equal to 1, an approximation server 150, and neural network 200.
  • The one or more databases 120-1 through 120-N (hereinafter referred to as database 120 or databases 120, merely for simplicity) may store one or more structured data sets. In some embodiments, a database 120 may be implemented as any one of: a distributed database, data warehouse, federated database, graph database, columnar database, and the like. A database 120 may include a database management system (DBMS), not shown, which manages access to the database. In certain embodiments, a database 120 may include one or more tables of data.
  • The network 110 is connected to the neural network (NN) 200 and, in some embodiments, the training set generator 130. The NN 200 may be implemented as a recurrent NN (RNN). In an embodiment, a plurality of NNs may be implemented. For example, a second NN may have more layers than a first NN, as described herein below. The second NN may generate predictions with a higher degree of certainty (i.e., have a higher confidence level) than the first NN, while requiring more memory to store its NN model than the first NN.
  • The network 110 is further connected to a plurality of user nodes 140-1 through 140-M (hereinafter referred to as user node 140 or user nodes 140, merely for simplicity). A user node 140 may be a mobile device, a smartphone, a desktop computer, a laptop computer, a tablet computer, a wearable device, an Internet of Things (IoT) device, and the like. The user node 140 is configured to send a query to be executed on one or more of the databases 120. In an embodiment, a user node 140 may send the query directly to a database 120, to be handled, for example, by the DBMS. In a further embodiment, the query is sent to an approximation server 150.
  • The training set generator 130 is configured to receive, for example from a DBMS of a database 120, a plurality of training queries, from which to generate a training set for the neural network 200. An embodiment of the training set generator 130 is discussed in more detail with respect to FIG. 5.
  • In an embodiment, the approximation server 150 is configured to receive queries from the user nodes 140, and send the received queries to be executed on the appropriate databases 120. The approximation server 150 may also be configured to provide a user node 140 with an approximate result, generated by the NN 200. This is discussed in more detail below with respect to FIG. 2.
  • The approximation server 150 may include a processing circuitry (not shown) that may be realized as one or more hardware logic components and circuits. For example, and without limitation, illustrative types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information. In a further embodiment, the processing circuitry of the approximation server 150 is configured to include the training set generator and the neural network.
  • In an embodiment, the approximation server 150 may further include memory (not shown) configured to store software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing circuitry to perform the various processes described herein.
  • FIG. 1B shows a network diagram 105 of a system for approximating query results according to another embodiment. In the shown embodiment, multiple networks are employed for approximating the query results. A first network 110-1 is connected to databases 120 and a second network 110-2 is connected to one or more user nodes (UN) 140-1 through 140-M and a neural network machine 200-1. The first network 110-1 and the second network 110-2 are each further connected to a central link 160.
  • In one embodiment, the central link 160 includes an approximation server 150, a training set generator (TSG) 130, and a neural network 200. In a further embodiment, the approximation server 150 includes the training set generator 130 and the neural network 200. In other deployments, additional networks 110-i are further connected to the central link 160. Specifically, each additional network 110-i is connected to one or more user nodes 140-L through 140-J and a local neural network machine 200-K. In the exemplary embodiment, ‘M’, ‘N’, ‘i’, ‘J’, ‘K’ and ‘L’ are integers greater than or equal to 1.
  • The second network 110-2 and each additional network 110-i may include local networks, such as, but not limited to, virtual private networks (VPNs), local area networks (LANs), and the like. Each local network includes a local NN machine 200-1 through 200-K for storing a NN model, which is generated by the approximation server 150. In an example, a NN model may be stored on one or more of the user nodes 140-1 through 140-M, which are communicatively connected to the local network 110-2. The user node 140-1 is configured to send a query to be executed on one or more of the databases 120 either directly (not shown) or via the approximation server 150 of the central link 160. The approximation server 150 may be configured to provide the user node 140-1 with an approximate result generated by the NN 200.
  • In some embodiments, a first NN and second NN are trained on a data set of one or more databases 120. For example, the first NN may include fewer layers and neurons than the second NN. The first NN may be stored in one or more local NN machines 200-1 through 200-K, such as local NN machine 200-1, and the second NN may be stored on the approximation server 150, e.g. the neural network 200. When a user node 140-1 sends a query for execution, the first NN stored on local NN machine 200-1 may provide an initial predicted result to the user node 140-1. The approximation server 150 will then provide a second predicted result having a greater accuracy than the initial predicted result. In some embodiments, the approximation server 150 may send the query for execution on the data set from the database 120, and provide the real result to the user node 140-1.
  • In an embodiment, the first NN is executed on a user node only if the user node has the computational resources, e.g., sufficient processing power and memory, to efficiently execute the query on the first neural network. If not, then the user node may be configured to either access a local machine (e.g., a dedicated machine, or another user node on the local network) to generate predictions from a local neural network, or be directed to the approximation server 150 of the central link 160.
  • It should be appreciated that the arrangement discussed with reference to FIG. 1B allows for a more efficient response to a query than sending a query directly to a database 120 for a real result, as the time required to receive a result is reduced by providing an initial approximation, a more accurate second approximation, and finally a real result. It should be further appreciated that using a plurality of neural networks allows for increasingly accurate results to be obtained. It should be further noted that implementing the neural network on a local machine or even a user device allows for a significant reduction of the storage required to maintain the data, as the NN model stored on the local machine typically includes fewer layers (e.g., in comparison to a NN implemented on a server). The neural network on a local machine requires less memory to store its NN model than the second NN. For example, a typical size of a neural network can be in the tens of megabytes, compared with a database (maintaining substantially the same data) having a size of hundreds of gigabytes.
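  • A minimal sketch of this progressive flow follows; the function names, stand-in callables for the local NN, the server NN, and direct database execution, and the toy values are assumptions rather than part of the disclosure.
```python
# The user node gets an immediate prediction from the local (lean) network, then a
# more accurate prediction from the approximation server, and finally the real
# result from the database.
import time
from typing import Callable

def progressive_answer(query: str,
                       local_nn: Callable[[str], float],
                       server_nn: Callable[[str], float],
                       run_on_database: Callable[[str], float],
                       deliver: Callable[[str, float], None]) -> None:
    deliver("local approximation", local_nn(query))     # fastest, least accurate
    deliver("server approximation", server_nn(query))   # still approximate, more layers
    deliver("real result", run_on_database(query))      # authoritative but slowest

# Toy usage with stand-in callables.
progressive_answer(
    "SELECT SUM(income) FROM people",
    local_nn=lambda q: 40.8,
    server_nn=lambda q: 42.1,
    run_on_database=lambda q: (time.sleep(0.1), 42.5)[1],  # simulate slow execution
    deliver=lambda stage, value: print(f"{stage}: {value}"),
)
```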
  • FIG. 2 is a schematic diagram of a neural network (NN) 200 for generating approximate results for a database query, according to an embodiment. A neural network 200 includes an input numerical translator matrix 205. The input numerical translator matrix 205 is configured to receive a query and translate the query to a numerical representation which can be fed into an input neuron 215 of a neural network 200.
  • The input numerical translator matrix 205 is configured to determine what elements, such as predicates and expressions, are present in the received query. In an embodiment, each element is mapped by an injective function to a unique numerical representation. For example, the input numerical translator matrix 205 may receive a query and generate, for each unique query, a unique vector. The unique vectors may be fed as input to one or more of input neurons 215, which together form an input layer 210 of the neural network 200.
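  • As an illustration only, an input numerical translator of this kind could be sketched as follows; the class name, tokenization, and toy vocabulary are assumptions rather than the disclosed implementation.
```python
# Map query elements (keywords, columns, operators) to a fixed-length numeric vector
# via an injective element-to-index mapping.
import re
from typing import List

class QueryVectorizer:
    """Translates a query's elements into a numeric vector for the input layer."""

    def __init__(self, vocabulary: List[str]):
        # Injective mapping: each known query element gets a unique index.
        self.index = {element: i for i, element in enumerate(vocabulary)}

    def vectorize(self, query: str) -> List[float]:
        # Crude element extraction: split on words, numbers, and operators.
        elements = re.findall(r"[A-Za-z_]+|\d+|[<>=*]", query.upper())
        vector = [0.0] * len(self.index)
        for element in elements:
            if element in self.index:
                vector[self.index[element]] += 1.0  # count occurrences of each element
        return vector

# Example usage with a toy vocabulary of SQL keywords, columns, and operators.
vectorizer = QueryVectorizer(
    ["SELECT", "SUM", "FROM", "WHERE", "BETWEEN", "AND", "INCOME", "AGE", "SALES"]
)
print(vectorizer.vectorize("SELECT SUM(income) FROM sales WHERE age BETWEEN 18 AND 79"))
```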
  • Each neuron (also referred to as a node) of the neural network 200 is configured to apply a function to its input and send the output of the function forward (e.g., to another neuron), and may include a weight function. A weight function of a neuron determines the amount of contribution a single neuron has on the eventual output of the neural network. The higher the weight value, the more effect the neuron's computation has on the output of the neural network.
  • The neural network 200 further includes a plurality of hidden neurons 225 in a hidden layer 220. In this exemplary embodiment, a single hidden layer 220 is shown; however, a plurality of hidden layers may be implemented without departing from the scope of the disclosed embodiments.
  • In an embodiment, the neural network 200 is configured such that each output of an input neuron 215 of the input layer 210 is used as an input of one or more hidden neurons 225 in the hidden layer 220. Typically, all outputs of the input neurons 215 are used as inputs to all the hidden neurons 225 of the hidden layer 220. In embodiments where a plurality of hidden layers is implemented, the output of the input layer 210 is used as the input for the hidden neurons of a first hidden layer.
  • The neural network 200 further includes an output layer 230, which includes one or more output neurons 235. The output of the hidden layer 220 is the input of the output layer 230. In an embodiment where a plurality of hidden layers is implemented, the output of the final hidden layer is the input of the output neurons 235 of the output layer 230. In some embodiments, the output neurons 235 of the output layer 230 may provide a result to an output numerical translator matrix 206, which is configured to translate the output of the output layer 230 from a numerical representation to a query result. The result may then be sent to the user node which sent the query.
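  • For illustration only, the layer-by-layer flow of FIG. 2 may be sketched as the following Python fragment, in which every input neuron output feeds every hidden neuron, the hidden layer feeds the output layer, and an output translation step maps the numeric output back to a query result. The weight shapes, activation function, and scalar output are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        w_input_hidden = rng.normal(size=(32, 16))   # input layer 210 -> hidden layer 220
        w_hidden_output = rng.normal(size=(16, 1))   # hidden layer 220 -> output layer 230

        def forward(query_vector):
            hidden = np.tanh(query_vector @ w_input_hidden)  # all inputs feed all hidden neurons
            return hidden @ w_hidden_output                   # hidden outputs feed the output neuron

        def translate_output(numeric_output):
            # Analogue of the output numerical translator matrix 206: map the raw
            # number back to a result the user node can display (assumed scalar).
            return float(numeric_output[0])

        result = translate_output(forward(np.ones(32)))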
  • In some embodiments, the neural network 200 may be stored on one or more user nodes (e.g., the user nodes 140 of FIGS. 1A and 1B). This could allow, for example, a user of the user node to receive a response to a query faster than if the query were sent to be executed on a remote database. Additionally, the neural network 200 can be stored within a central link (e.g., the central link 160 of FIG. 1), where multiple local networks can access the neural network 200. While the result may not always be as accurate as querying the data directly for real results, using the neural network 200 allows the user of a user node to receive a faster response with an approximated result above a predetermined level of accuracy.
  • A neural network 200 may be trained by executing a number of training queries and comparing the results from the neural network 200 to real results determined from querying a database directly. The training of a neural network 200 is discussed in further detail below regarding FIG. 5. In an embodiment, a neural network 200 may further include a version identifier, indicating the amount of training data the neural network 200 has received. For example, a higher version number may indicate that a first neural network has more up-to-date training than a second neural network with a lower version number. The most up-to-date version of a neural network may be stored on an approximation server, e.g., the approximation server 150 of FIGS. 1A and 1B.
  • In an embodiment, a user node may periodically poll the approximation server to check if there is an updated version of the neural network. In another embodiment, the approximation server may push a notification to one or more user nodes to indicate that a new version of the neural network is available and downloadable over a network connection. In some embodiments, the approximation server may have stored therein a plurality of trained neural networks, wherein each neural network is trained on a different data set. While a plurality of neural networks may be trained on different data sets, it is understood that some overlap may occur between data sets.
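  • For illustration, a user node's periodic poll of the approximation server may resemble the following Python sketch; the endpoint path, the JSON payload format, and the version value are assumptions and are not part of the disclosed system.
        import json
        import urllib.request

        LOCAL_MODEL_VERSION = 3

        def poll_for_update(meta_url="http://approximation-server.example/model/meta"):
            # Ask the approximation server for the latest model metadata.
            with urllib.request.urlopen(meta_url) as resp:
                meta = json.load(resp)                 # assumed form: {"version": 5, "url": "..."}
            if meta["version"] > LOCAL_MODEL_VERSION:
                with urllib.request.urlopen(meta["url"]) as model_resp:
                    return meta["version"], model_resp.read()   # newer model bytes
            return LOCAL_MODEL_VERSION, None                     # already up to date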
  • It should be noted that the neural network discussed with reference to FIG. 1 can be executed over, or realized by, general-purpose or dedicated hardware. Examples of such hardware include analog or digital neuro-computers. Such computers may be realized using any one of, or a combination of, electronic components, optical components, a von Neumann multiprocessor, a graphical processing unit (GPU), a vector processor, an array processor, a tensor processing unit, and the like.
  • FIG. 3 is a flowchart illustrating a method 300 for training a neural network to approximate results for queries related to a data set according to an embodiment. At S310, a batch of training query pairs is received, e.g., by the approximation server 150 of FIGS. 1A and 1B. In an embodiment, the batch of training queries is generated by a training set generator and includes a plurality of queries and their matching real results. The real result of a query may be previously generated by executing the query directly on a data set. The query includes query elements, which include, for example, predicates and expressions. A real result of the query may be, for example, an alphanumerical string or a calculated number value. In an embodiment, a query may relate to all, or part, of a data set. For example, a query directed to a columnar database may be executed based on a subset of columns of a table which does not include all columns of the table. Typically, the query pairs are vectorized to a format which the neural network is able to process, for example, by an input numerical translator matrix (e.g., as shown in FIG. 2).
  • At S320, the batch of training queries is fed to a neural network to generate a predicted result for each query. The neural network is configured to receive a batch of training queries, where a plurality of batches is called an epoch. The queries may be fed through one or more layers within the neural network. For example, a query may be fed through an input layer, a hidden layer, and an output layer. In an embodiment, each query is first fed to an input numerical translator matrix to determine elements present within the query. Each element is mapped, e.g., by an injective function, to a numerical representation, such as a vector. The vectors may be fed to one or more neurons, where each neuron is configured to apply a function to the vector, where the function includes at least a weight function. In an example embodiment, the weight function determines the contribution of each neuron function toward a final query predicted result.
  • At S330, a comparison is made between the predicted result of a query and the real result of that query. The comparison includes determining the differences between the predicted result and the real result. For example, if the real result is a number value, the comparison includes calculating the difference between a number output value from the predicted result and the number value of the real result.
  • At S340, a determination is made as to whether a weight of one or more of the neurons of the neural network should be adjusted. The determination may be made, for example, if the difference between the first predicted result and the first real result exceeds a first threshold. For example, if the difference in number values exceeds 15%, it may be determined that the first threshold is exceeded. If it is determined that the weight should be adjusted, execution continues at S350; otherwise, execution continues at S360.
  • At S350, the weight of a neuron is adjusted via a weight function. The weight of a neuron determines the amount of contribution a single neuron has on the eventual output of the neural network. The higher the weight value, the more effect the neuron's computation has on the output of the neural network. Adjusting weights may be performed, for example, by methods of back propagation. One example of such a method is “backward propagation of errors,” which is an algorithm for supervised learning of neural networks using gradient descent. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights.
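  • The sequence of S320 through S350 may be illustrated, on a deliberately tiny model, by the following Python sketch: a predicted result is generated, compared against the real result, and the weights are adjusted by a gradient step only when the relative difference exceeds a threshold. The single linear layer, the squared-error gradient, and the 15% threshold are illustrative assumptions rather than the disclosed training procedure.
        import numpy as np

        rng = np.random.default_rng(0)
        weights = rng.normal(size=8)                  # one linear "neuron" for illustration

        def train_on_pair(query_vec, real_result, lr=0.01, threshold=0.15):
            global weights
            predicted = query_vec @ weights                                    # S320: predicted result
            diff = abs(predicted - real_result) / max(abs(real_result), 1e-9)  # S330: comparison
            if diff > threshold:                                               # S340: adjust weights?
                grad = 2.0 * (predicted - real_result) * query_vec             # gradient of squared error
                weights = weights - lr * grad                                  # S350: gradient-descent update
            return predicted, diff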
  • At S360, it is determined whether the training of the neural network should continue. In an embodiment, training will end if an epoch has been fully processed, i.e., if the entire plurality of batches has been processed via the neural network. If the epoch has not ended, execution continues at S310, where a new batch is fed to the neural network; otherwise, execution terminates. In some embodiments, a check is performed to determine the number of epochs which the system has processed. The system may generate a target number of epochs with which to train the neural network, based on the number of training queries generated, the variance of the data set, the size of the data set, and the like.
  • FIG. 4 is an example graph 400 showing real query results 420 plotted against predictions 410 of a neural network according to an embodiment. An ideal version of the graph would be linear, indicating that the neural network accurately predicts the real results. As the neural network is trained, i.e., as batches of training query pairs are processed by the neural network, the graph should converge to a linear function.
  • In the exemplary embodiment of the method presented in FIG. 3, at S360, the determination to continue the training process may include plotting the predicted results against the real results, determining a function, and performing a regression on that function to determine whether it is sufficiently linear. The function may be considered sufficiently linear if, for example, the R² value of the regression is above a predetermined threshold.
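  • One possible way to perform the linearity check at S360, given as a non-limiting Python sketch, is to compute the R² value of the predicted results against the real results and stop training once it exceeds a chosen threshold (the 0.95 value below is an assumption):
        import numpy as np

        def training_converged(predicted_results, real_results, r2_threshold=0.95):
            predicted = np.asarray(predicted_results, dtype=float)
            real = np.asarray(real_results, dtype=float)
            ss_res = np.sum((real - predicted) ** 2)          # residual sum of squares
            ss_tot = np.sum((real - real.mean()) ** 2)        # total sum of squares
            r2 = 1.0 - ss_res / ss_tot
            return r2 >= r2_threshold

        # e.g., training_converged([10.1, 19.8, 31.2], [10, 20, 30]) evaluates to True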
  • FIG. 5 is a flowchart illustrating a method 500 for generating a query training set for a neural network according to an embodiment. As noted above, the training of neural networks is required in order to provide sufficiently accurate results, where sufficiently accurate results are those for which the percentage of predicted results matching the real results is above a predetermined threshold. Training a neural network involves exposing the neural network to a plurality of training conditions and their previously calculated real results. This allows the neural network to adjust the weight functions of the neurons within the neural network.
  • Typically, a large training set is required to achieve accurate results. However, training sets having both a sufficient depth of data (e.g., queries which require different areas of data for their results, take variance into account, and the like), and a sufficiently large quantity of query examples are not always available. Therefore, it may be advantageous to generate a qualified training set. An exemplary method is discussed herein.
  • At S510, a first set of queries is received, e.g., by a training set generator. The first set of queries may be queries that have been generated by one or more users, for example through user nodes. Typically, this first set of queries does not include enough queries to train a neural network to a point where the predictions are sufficiently accurate.
  • At S520, a variable element of a first query of the first set of queries is determined. For example, a query may be the following:
      • select sum(Income) from data where sales between 18 and 79
        where the variable ‘sales’ has a value between 18 and 79.
  • At S530, a variance of the data set is determined with respect to the determined variable, where the values covered by the query constitute only a subset of the values the variable can take. Following the above example, where the variable ‘sales’ has a value between 18 and 79, the real full data set may have values ranging between 0 and 1,000. Thus, querying for the sum of income values between 18 and 79 may not be representative of the sum of income for the entire data set, which would bias the NN model. In order to avoid this, the variance of the training queries is determined to take this potential bias into account.
  • At S540, a training query is generated based on the determined variable and the variance thereof. In the above example, the following query will be generated:
      • select sum(Income) from data where sales between 24 and 82
        or, as another example:
      • select average(Income) from data where sales between 312 and 735
        In the above examples, the training set generator determines a predicate. A predicate is an expression used to determine whether the query will return true or false, e.g., the result the query is requesting. The training set generator then generates the training query based on the predicate, the variable, and the variance of the query, as sketched below.
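  • A non-limiting Python sketch of the query generation at S520 through S540 follows; it samples new predicate bounds from the full range of the ‘sales’ column (assumed, per the example above, to span 0 to 1,000) so that the generated training queries reflect the variance of the data set. The query template and aggregate functions mirror the examples above and are illustrative only.
        import random

        TEMPLATE = "select {agg}(Income) from data where sales between {lo} and {hi}"

        def generate_training_queries(n, column_min=0, column_max=1000, seed=42):
            rng = random.Random(seed)
            queries = []
            for _ in range(n):
                lo = rng.randint(column_min, column_max - 1)   # lower predicate bound
                hi = rng.randint(lo + 1, column_max)           # upper predicate bound
                agg = rng.choice(["sum", "average"])           # aggregate, per the examples above
                queries.append(TEMPLATE.format(agg=agg, lo=lo, hi=hi))
            return queries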
  • At S550, a determination is made whether to generate another training query. If so, execution continues at S520; otherwise, execution continues at S560. The determination may be based on, for example, whether a total number of queries (real and generated) has exceeded a predetermined threshold, whether the total number of generated queries is above a threshold, and the like. For example, it may be determined whether the training queries constitute a representative sample of the data set (i.e., queries that are directed to all portions of the data, or to a number of portions of the data above a predetermined threshold). In another example, it may be determined whether additional variance is required for certain predicates.
  • At S560, the training queries are provided to the input layer of the neural network for training. Typically, the training queries are first executed on the data set to generate query pairs, each of which includes a query and its real result. The training queries and real results are then vectorized into a matrix representation which is fed to the neural network (as described in more detail with respect to FIG. 3).
  • FIG. 6 is a flowchart illustrating a method 600 for generating approximate query results for a neural network according to an embodiment. At S610, a query is received for execution on a data set. The query may be received by an approximation server from a user node, e.g., the approximation server 150 and user node 140 of FIG. 1.
  • At S620, the query is sent to a trained neural network. In an embodiment, it is determined if the neural network is trained to provide a sufficiently accurate response to the received query. This may be based on, for example, a version number of the neural network indicating the training level thereof.
  • At S630, a determination is performed to ascertain whether the query should be executed on the data set. In some embodiments, it may be advantageous to first supply an approximate answer immediately as the query is received, while additionally computing the real result of the query on the data set. This determination may be based on, for example, the version number of the neural network, the resources available to run the query through the neural network, the time required to execute the query, and so on. If it is determined that a real result should be provided, execution continues at S640; otherwise, execution continues at S635.
  • At S635, a first result, or a predicted result, is provided, e.g., sent to the user node from which the query was received.
  • At S640, the predicted result is provided, e.g., to the user node from which the query was received, while the query is executed on one or more relevant data sets to determine the real results thereof. Execution may include sending all or part of the query to a DBMS of a database for execution thereon.
  • At S650, a second result, or an updated result, is provided, e.g., to the user node, where the update is based on the calculated real results. In an embodiment, a notification may be provided to indicate that the result has been updated from an approximate, predicted result to a real result. The notification may be a textual notification, a visual notification (such as the text or background of the notification changing colors), and the like.
  • At S660, it is determined whether to use the real result to further train the neural network. For example, if the difference between the real result and the predicted result is below a second threshold, it may be determined not to train the neural network, as the results are sufficiently accurate. Alternatively, it may be determined that the same result should be used for training, even if the difference is below the second threshold, in order to reinforce the quality of the prediction. If a real result is to be used for training, execution continues at S670; otherwise, execution terminates.
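  • For illustration, the flow of S610 through S660 may be sketched in Python as follows, with the helper callables (nn_predict, run_on_database, send_to_user) standing in for the components described above; these names and the 5% retraining threshold are assumptions made for illustration.
        def handle_query(query, nn_predict, run_on_database, send_to_user,
                         retrain_threshold=0.05):
            predicted = nn_predict(query)            # S620/S635: fast approximate result
            send_to_user(predicted, final=False)
            real = run_on_database(query)            # S640: real result from the data set
            send_to_user(real, final=True)           # S650: updated result and notification
            diff = abs(real - predicted) / max(abs(real), 1e-9)
            # S660: return the pair for further training only if the prediction was off.
            return (query, real) if diff > retrain_threshold else None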
  • At S670, the query and real result are sent to the neural network as an input to the input layer of the neural network. The neural network may be trained based on its latest state, i.e., its version number. The version number may be updated every time the neural network is trained based on the real result and the predicted result.
  • In an example embodiment, an approximation server of the neural network receives a plurality of queries and their real results, e.g., from S640, and stores them for periodically training the neural network. In another example embodiment, the query and result may be used by a training set generator to generate another set of training queries. In certain embodiments where the neural network further includes a version number, the version number may be updated each time the neural network is retrained. A copy with a version number of the neural network may be stored on any of the devices discussed with respect to FIGS. 1A and 1B above.
  • In an embodiment, the received query may be provided to a plurality of neural networks to be executed on each of their models, e.g., at S620, where at least two NNs of the plurality of NNs differ from each other in the number of layers and/or neurons. For example, a first neural network may receive the query and generate a first predicted result. The first predicted result may be sent to a user node, a dashboard, a report, and the like. In parallel, or subsequently, the query is sent to a second neural network that has more layers, neurons, or both, than the first neural network.
  • Upon receiving a second predicted result from the second neural network, the result available to the user node may be updated, e.g., at S650. In certain embodiments, a loss function may be determined and a result thereof generated, for example by the approximation server. A loss function may be, for example, a root mean squared error. The loss function may be used to determine a confidence level of the prediction of a neural network. In an embodiment, it may be desirable to provide the query to the “leanest” neural network (i.e., the NN with the fewest layers, neurons, or both), which would require the least computational resources.
  • A confidence level may be determined for the prediction, and if it falls below a threshold (i.e., the confidence level is too low) then the query may be provided to the next neural network, which would require more computational resources than the first NN, but may require less computational resources than a third NN or than executing the query on the data set itself to generate real results.
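  • The cascade described in the two preceding paragraphs may be illustrated by the following Python sketch, in which the query is first offered to the leanest neural network and escalated only when a confidence value derived from a root mean squared error (RMSE) over recent evaluation pairs falls below a threshold; the particular confidence formula and threshold are assumptions made for illustration.
        import math

        def rmse(predictions, targets):
            return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets))
                             / len(predictions))

        def cascade_predict(query_vec, predictors, recent_eval_pairs, min_confidence=0.9):
            # predictors: prediction callables ordered from leanest NN to largest NN.
            # recent_eval_pairs: per-NN (predicted, real) result lists for recent queries.
            for predict_fn, (preds, reals) in zip(predictors, recent_eval_pairs):
                confidence = 1.0 / (1.0 + rmse(preds, reals))   # assumed confidence proxy
                if confidence >= min_confidence:
                    return predict_fn(query_vec)                # trusted and lean enough
            return None   # fall through: execute the query on the data set for a real result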
  • The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
  • As used herein, the phrase “at least one of” followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including “at least one of A, B, and C,” the system can include A alone; B alone; C alone; A and B in combination; B and C in combination; A and C in combination; or A, B, and C in combination.
  • All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Claims (21)

1. A method for providing local approximations of query results, comprising:
querying a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set;
receiving from the primary neural network a predicted test result in response to the at least one test query;
sending, based on the predicted test result, a model of a local neural network to a local machine; and
storing the model of the local neural network on the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
2. The method of claim 1, further comprising:
training the primary neural network.
3. The method of claim 2, further comprising:
providing, to a plurality of input neurons of the primary neural network, a plurality of training queries and their respective real results.
4. The method of claim 3, wherein the primary neural network further includes a plurality of hidden neurons and a plurality of output neurons.
5. The method of claim 1, wherein the local machine is a user node.
6. The method of claim 1, wherein the model of the local neural network further includes a version indicator.
7. The method of claim 6, further comprising:
sending the local machine an updated model of the local neural network in response to receiving from the local machine a version indicator of the local neural network model that is lower than a version indicator of the primary neural network.
8. The method of claim 1, wherein the primary neural network is stored on a server accessible to user nodes over a network.
9. The method of claim 1, wherein the primary neural network includes a number of neurons and layers greater than the local neural network.
10. The method of claim 1, further comprising:
providing the user query to the primary neural network;
generating, in response to the user query, a predicted result by the primary neural network; and
sending the predicted result from the primary neural network to the user node.
11. The method of claim 1, wherein the test query is a training query.
12. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process for providing local approximations of query results, the process comprising:
querying a primary neural network with at least one test query, wherein the at least one test query includes a real test result derived from executing the at least one test query on a data set;
receiving from the primary neural network a predicted test result in response to the at least one test query;
sending, based on the predicted test result, a model of a local neural network to a local machine; and
storing the model of the local neural network on the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
13. A system for providing local approximations of query results, comprising:
a processing circuitry; and
a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to:
query a primary neural network with at least one test query, wherein the test query includes a real test result derived from executing the at least one test query on a data set;
receive from the primary neural network a predicted test result in response to the test query;
send, based on the predicted test result, a model of a local neural network to a local machine; and
store the model of the local neural network on the local machine, wherein the local neural network is configured to generate a prediction in response to a user query received by the local machine.
14. The system of claim 13, wherein the system is further configured to:
train the primary neural network.
15. The system of claim 14, wherein the system is further configured to:
provide, to a plurality of input neurons of the primary neural network, a plurality of training queries and their respective real results.
16. The system of claim 15, wherein the primary neural network further includes a plurality of hidden neurons and a plurality of output neurons.
17. The system of claim 13, wherein the local machine is a user node.
18. The system of claim 13, wherein the model of the local neural network further includes a version indicator.
19. The system of claim 18, wherein the system is further configured to:
send the local machine an updated model of the local neural network in response to receiving from the local machine a version indicator of the local neural network model that is lower than a version indicator of the primary neural network.
20. The system of claim 16, wherein the primary neural network is stored on a server accessible to user nodes over a network.
21. The system of claim 20, wherein the system is further configured to:
provide the user query to the primary neural network;
generate, in response to the user query, a predicted result by the primary neural network; and
send the predicted result from the primary neural network to the user node.
US15/858,943 2017-08-14 2017-12-29 System and method for approximating query results using local and remote neural networks Pending US20190050725A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/858,943 US20190050725A1 (en) 2017-08-14 2017-12-29 System and method for approximating query results using local and remote neural networks

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201762545046P 2017-08-14 2017-08-14
US201762545050P 2017-08-14 2017-08-14
US201762545058P 2017-08-14 2017-08-14
US201762545053P 2017-08-14 2017-08-14
US15/858,943 US20190050725A1 (en) 2017-08-14 2017-12-29 System and method for approximating query results using local and remote neural networks

Publications (1)

Publication Number Publication Date
US20190050725A1 true US20190050725A1 (en) 2019-02-14

Family

ID=65275439

Family Applications (4)

Application Number Title Priority Date Filing Date
US15/858,967 Active 2038-09-22 US10642835B2 (en) 2017-08-14 2017-12-29 System and method for increasing accuracy of approximating query results using neural networks
US15/858,943 Pending US20190050725A1 (en) 2017-08-14 2017-12-29 System and method for approximating query results using local and remote neural networks
US15/858,957 Active 2041-01-01 US11321320B2 (en) 2017-08-14 2017-12-29 System and method for approximating query results using neural networks
US15/858,936 Pending US20190050724A1 (en) 2017-08-14 2017-12-29 System and method for generating training sets for neural networks

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/858,967 Active 2038-09-22 US10642835B2 (en) 2017-08-14 2017-12-29 System and method for increasing accuracy of approximating query results using neural networks

Family Applications After (2)

Application Number Title Priority Date Filing Date
US15/858,957 Active 2041-01-01 US11321320B2 (en) 2017-08-14 2017-12-29 System and method for approximating query results using neural networks
US15/858,936 Pending US20190050724A1 (en) 2017-08-14 2017-12-29 System and method for generating training sets for neural networks

Country Status (3)

Country Link
US (4) US10642835B2 (en)
EP (1) EP3659044A4 (en)
WO (3) WO2019035860A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11321320B2 (en) 2017-08-14 2022-05-03 Sisense Ltd. System and method for approximating query results using neural networks

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11615285B2 (en) 2017-01-06 2023-03-28 Ecole Polytechnique Federale De Lausanne (Epfl) Generating and identifying functional subnetworks within structural networks
US11663478B2 (en) 2018-06-11 2023-05-30 Inait Sa Characterizing activity in a recurrent artificial neural network
US11893471B2 (en) 2018-06-11 2024-02-06 Inait Sa Encoding and decoding information and artificial neural networks
US20200019841A1 (en) * 2018-07-12 2020-01-16 Vmware, Inc. Neural network model for predicting usage in a hyper-converged infrastructure
US11042538B2 (en) * 2018-08-24 2021-06-22 Mastercard International Incorporated Predicting queries using neural networks
GB2580467A (en) * 2018-09-20 2020-07-22 Idera Inc Database access, monitoring, and control system and method for reacting to suspicious database activities
US11652603B2 (en) 2019-03-18 2023-05-16 Inait Sa Homomorphic encryption
US11569978B2 (en) * 2019-03-18 2023-01-31 Inait Sa Encrypting and decrypting information
CN110287233A (en) * 2019-06-18 2019-09-27 华北电力大学 A kind of system exception method for early warning based on deep learning neural network
US10740403B1 (en) 2019-08-23 2020-08-11 Capital One Services Llc Systems and methods for identifying ordered sequence data
US11481676B2 (en) * 2019-08-27 2022-10-25 Sap Se Sensitivity in supervised machine learning with experience data
US11651210B2 (en) 2019-12-11 2023-05-16 Inait Sa Interpreting and improving the processing results of recurrent neural networks
US11797827B2 (en) 2019-12-11 2023-10-24 Inait Sa Input into a neural network
US11580401B2 (en) 2019-12-11 2023-02-14 Inait Sa Distance metrics and clustering in recurrent neural networks
US11816553B2 (en) 2019-12-11 2023-11-14 Inait Sa Output from a recurrent neural network
US20230099543A1 (en) * 2020-03-06 2023-03-30 Cornell University Application-specific computer memory protection
US11294916B2 (en) * 2020-05-20 2022-04-05 Ocient Holdings LLC Facilitating query executions via multiple modes of resultant correctness
US11734354B2 (en) * 2020-11-30 2023-08-22 Mastercard International Incorporated Transparent integration of machine learning algorithms in a common language runtime environment
US11669331B2 (en) 2021-06-17 2023-06-06 International Business Machines Corporation Neural network processing assist instruction
US11675592B2 (en) 2021-06-17 2023-06-13 International Business Machines Corporation Instruction to query for model-dependent information
US11797270B2 (en) 2021-06-17 2023-10-24 International Business Machines Corporation Single function to perform multiple operations with distinct operation parameter validation
US11269632B1 (en) 2021-06-17 2022-03-08 International Business Machines Corporation Data conversion to/from selected data type with implied rounding mode
US11734013B2 (en) 2021-06-17 2023-08-22 International Business Machines Corporation Exception summary for invalid values detected during instruction execution
US11693692B2 (en) 2021-06-17 2023-07-04 International Business Machines Corporation Program event recording storage alteration processing for a neural network accelerator instruction

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180329951A1 (en) * 2017-05-11 2018-11-15 Futurewei Technologies, Inc. Estimating the number of samples satisfying the query

Family Cites Families (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5111531A (en) * 1990-01-08 1992-05-05 Automation Technology, Inc. Process control using neural network
CA2040903C (en) 1991-04-22 2003-10-07 John G. Sutherland Neural networks
US6108648A (en) 1997-07-18 2000-08-22 Informix Software, Inc. Optimizer with neural network estimator
US7181438B1 (en) 1999-07-21 2007-02-20 Alberti Anemometer, Llc Database access system
EP1417643A2 (en) 2001-01-31 2004-05-12 Prediction Dynamics Limited Neural network training
WO2004063831A2 (en) 2003-01-15 2004-07-29 Bracco Imaging S.P.A. System and method for optimization of a database for the training and testing of prediction algorithms
DE10320419A1 (en) * 2003-05-07 2004-12-09 Siemens Ag Database query system and method for computer-aided query of a database
US7831531B1 (en) 2006-06-22 2010-11-09 Google Inc. Approximate hashing functions for finding similar content
US10303999B2 (en) 2011-02-22 2019-05-28 Refinitiv Us Organization Llc Machine learning-based relationship association and related discovery and search engines
US20130117257A1 (en) 2011-11-03 2013-05-09 Microsoft Corporation Query result estimation
KR20130090147A (en) * 2012-02-03 2013-08-13 안병익 Neural network computing apparatus and system, and method thereof
CN102724308A (en) 2012-06-13 2012-10-10 腾讯科技(深圳)有限公司 Software update method and software update system
US9460401B2 (en) * 2012-08-20 2016-10-04 InsideSales.com, Inc. Using machine learning to predict behavior based on local conditions
US9280583B2 (en) 2012-11-30 2016-03-08 International Business Machines Corporation Scalable multi-query optimization for SPARQL
GB201402736D0 (en) 2013-07-26 2014-04-02 Isis Innovation Method of training a neural network
US9373057B1 (en) 2013-11-01 2016-06-21 Google Inc. Training a neural network to detect objects in images
US10311372B1 (en) 2014-12-19 2019-06-04 Amazon Technologies, Inc. Machine learning based content delivery
US11663409B2 (en) 2015-01-23 2023-05-30 Conversica, Inc. Systems and methods for training machine learning models using active learning
US10713594B2 (en) * 2015-03-20 2020-07-14 Salesforce.Com, Inc. Systems, methods, and apparatuses for implementing machine learning model training and deployment with a rollback mechanism
US9336483B1 (en) 2015-04-03 2016-05-10 Pearson Education, Inc. Dynamically updated neural network structures for content distribution networks
US11250342B2 (en) 2015-07-16 2022-02-15 SparkBeyond Ltd. Systems and methods for secondary knowledge utilization in machine learning
US20180268015A1 (en) * 2015-09-02 2018-09-20 Sasha Sugaberry Method and apparatus for locating errors in documents via database queries, similarity-based information retrieval and modeling the errors for error resolution
US10606846B2 (en) 2015-10-16 2020-03-31 Baidu Usa Llc Systems and methods for human inspired simple question answering (HISQA)
WO2017176356A2 (en) 2016-02-11 2017-10-12 William Marsh Rice University Partitioned machine learning architecture
US10878334B2 (en) 2016-03-17 2020-12-29 Veda Data Solutions, Inc. Performing regression analysis on personal data records
US10489393B1 (en) * 2016-03-30 2019-11-26 Amazon Technologies, Inc. Quasi-semantic question answering
US10706354B2 (en) 2016-05-06 2020-07-07 International Business Machines Corporation Estimating cardinality selectivity utilizing artificial neural networks
CN106021364B (en) * 2016-05-10 2017-12-12 百度在线网络技术(北京)有限公司 Foundation, image searching method and the device of picture searching dependency prediction model
US10281885B1 (en) * 2016-05-20 2019-05-07 Google Llc Recurrent neural networks for online sequence generation
WO2017212459A1 (en) 2016-06-09 2017-12-14 Sentient Technologies (Barbados) Limited Content embedding using deep metric learning algorithms
US20180341862A1 (en) 2016-07-17 2018-11-29 Gsi Technology Inc. Integrating a memory layer in a neural network for one-shot learning
WO2018015848A2 (en) 2016-07-17 2018-01-25 Gsi Technology Inc. Finding k extreme values in constant processing time
US20180032902A1 (en) * 2016-07-27 2018-02-01 Ford Global Technologies, Llc Generating Training Data For A Conversational Query Response System
US11115463B2 (en) 2016-08-17 2021-09-07 Microsoft Technology Licensing, Llc Remote and local predictions
CN106547828B (en) * 2016-09-30 2019-12-06 南京途牛科技有限公司 database caching system and method based on neural network
JP6929539B2 (en) * 2016-10-07 2021-09-01 国立研究開発法人情報通信研究機構 Non-factoid question answering system and method and computer program for it
US11093813B2 (en) * 2016-10-20 2021-08-17 Google Llc Answer to question neural networks
US10699215B2 (en) * 2016-11-16 2020-06-30 International Business Machines Corporation Self-training of question answering system using question profiles
US10324993B2 (en) 2016-12-05 2019-06-18 Google Llc Predicting a search engine ranking signal value
CN108509463B (en) * 2017-02-28 2022-03-29 华为技术有限公司 Question response method and device
EP3563302A1 (en) * 2017-04-20 2019-11-06 Google LLC Processing sequential data using recurrent neural networks
WO2018200979A1 (en) 2017-04-29 2018-11-01 Google Llc Generating query variants using a trained generative model
US11640436B2 (en) * 2017-05-15 2023-05-02 Ebay Inc. Methods and systems for query segmentation
CA3066227A1 (en) 2017-06-05 2018-12-13 Peng Jiang Customized coordinate ascent for ranking data records
US10268646B2 (en) * 2017-06-06 2019-04-23 Facebook, Inc. Tensor-based deep relevance model for search on online social networks
US20180357240A1 (en) 2017-06-08 2018-12-13 Facebook, Inc. Key-Value Memory Networks
US10482394B2 (en) * 2017-06-13 2019-11-19 Google Llc Large-scale in-database machine learning with pure SQL
US10255273B2 (en) * 2017-06-15 2019-04-09 Microsoft Technology Licensing, Llc Method and system for ranking and summarizing natural language passages
CN107220380A (en) * 2017-06-27 2017-09-29 北京百度网讯科技有限公司 Question and answer based on artificial intelligence recommend method, device and computer equipment
US11256985B2 (en) 2017-08-14 2022-02-22 Sisense Ltd. System and method for generating training sets for neural networks
EP3659044A4 (en) 2017-08-14 2020-09-16 Sisense Ltd. System and method for increasing accuracy of approximating query results using neural networks

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180329951A1 (en) * 2017-05-11 2018-11-15 Futurewei Technologies, Inc. Estimating the number of samples satisfying the query

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
G Hinton et al. Distilling the Knowledge in a Neural Network. arXiv. 9 March 2015. <URL: https://arxiv.org/pdf/1503.02531.pdf> (Year: 2015) *

Also Published As

Publication number Publication date
WO2019035861A1 (en) 2019-02-21
EP3659044A1 (en) 2020-06-03
WO2019035862A1 (en) 2019-02-21
EP3659044A4 (en) 2020-09-16
US20190050724A1 (en) 2019-02-14
US20190050726A1 (en) 2019-02-14
US11321320B2 (en) 2022-05-03
US10642835B2 (en) 2020-05-05
US20190050454A1 (en) 2019-02-14
WO2019035860A1 (en) 2019-02-21

Similar Documents

Publication Publication Date Title
US11321320B2 (en) System and method for approximating query results using neural networks
US20220075670A1 (en) Systems and methods for replacing sensitive data
US11256985B2 (en) System and method for generating training sets for neural networks
US10445170B1 (en) Data lineage identification and change impact prediction in a distributed computing environment
US20180046918A1 (en) Aggregate Features For Machine Learning
He et al. Parallel extreme learning machine for regression based on MapReduce
EP3591586A1 (en) Data model generation using generative adversarial networks and fully automated machine learning system which generates and optimizes solutions given a dataset and a desired outcome
US20210097343A1 (en) Method and apparatus for managing artificial intelligence systems
US20230139783A1 (en) Schema-adaptable data enrichment and retrieval
US11763203B2 (en) Methods and arrangements to adjust communications
US11562252B2 (en) Systems and methods for expanding data classification using synthetic data generation in machine learning models
CN113795853A (en) Meta-learning based automatic feature subset selection
US11645500B2 (en) Method and system for enhancing training data and improving performance for neural network models
US11874798B2 (en) Smart dataset collection system
WO2023224707A1 (en) Anomaly score normalisation based on extreme value theory
Abedini et al. Epci: an embedding method for post-correction of inconsistency in the RDF knowledge bases
US20240126798A1 (en) Profile-enriched explanations of data-driven models
US20230061914A1 (en) Rule based machine learning for precise fraud detection
Shen et al. Federated Learning Integrated CNN-Trans DDoS Attack Detection Model
CN111723074A (en) Method and device for generating query data model system

Legal Events

Date Code Title Description
AS Assignment

Owner name: SISENSE LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEVY YURISTA, GUY;AZARIA, ADI;ORAD, AMIR;AND OTHERS;SIGNING DATES FROM 20171231 TO 20180102;REEL/FRAME:044600/0532

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: SILICON VALLEY BANK, AS AGENT, NEW YORK

Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:SISENSE LTD;REEL/FRAME:052267/0325

Effective date: 20200330

Owner name: SILICON VALLEY BANK, NEW YORK

Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:SISENSE LTD;REEL/FRAME:052267/0313

Effective date: 20200330

AS Assignment

Owner name: SISENSE LTD, ISRAEL

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK, AS AGENT;REEL/FRAME:057594/0926

Effective date: 20210923

Owner name: SISENSE LTD, ISRAEL

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:057594/0867

Effective date: 20210923

Owner name: COMERICA BANK, MICHIGAN

Free format text: SECURITY INTEREST;ASSIGNOR:SISENSE LTD.;REEL/FRAME:057588/0698

Effective date: 20210923

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: SISENSE LTD., ISRAEL

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:COMERICA BANK;REEL/FRAME:063915/0257

Effective date: 20230608

AS Assignment

Owner name: HERCULES CAPITAL, INC., CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNORS:SISENSE LTD;SISENSE SF INC.;REEL/FRAME:063948/0662

Effective date: 20230608

AS Assignment

Owner name: SISENSE LTD., ISRAEL

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:TRIPLEPOINT VENTURE GROWTH BDC CORP;REEL/FRAME:063980/0047

Effective date: 20230609

Owner name: SISENSE SF, INC., NEW YORK

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:TRIPLEPOINT VENTURE GROWTH BDC CORP;REEL/FRAME:063980/0047

Effective date: 20230609

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: EX PARTE QUAYLE ACTION MAILED