WO2019140652A1 - Facilitating detection of data errors using existing data - Google Patents
- Publication number
- WO2019140652A1 (PCT/CN2018/073495)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- generalization
- patterns
- pair
- compatibility
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1479—Generic software techniques for error detection or fault masking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1008—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
- G06F11/1012—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices using codes or arrangements adapted for a specific type of error
- G06F11/1024—Identification of the type of error
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0751—Error or fault detection not based on redundancy
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1008—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
- G06F11/1048—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices using arrangements adapted for a specific error detection or correction feature
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/18—Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/81—Threshold
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/82—Solving problems relating to consistency
Definitions
- Accordingly, embodiments of the present disclosure are directed to facilitating automated detection of data errors using existing data. In particular, an extensive corpus of existing data can be analyzed to statistically detect incompatibility indicating data error. Such statistical analysis, or a portion thereof, can be stored in an index. For example, an index may include compatibility indicators that indicate a level or extent of compatibility between data values and/or patterns representing data values.
- Upon obtaining a set of target data, the data associated therewith can be analyzed to identify any errors. In particular, an error detection engine can analyze the set of target data and utilize a compatibility index to detect erroneous data. Values within the target data can be generalized to patterns using a generalization language (or languages). Such a pattern can then be used in association with an index to identify erroneous data. For instance, a pattern pair representing a value pair in the target data can be used to reference a compatibility indicator in the index that corresponds with a matching pattern pair. Such a compatibility indicator can indicate an extent or measure of compatibility.
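As an illustration of the index lookup just described, the following minimal sketch (an assumption for exposition, not the patent's implementation; the class and method names are hypothetical) models a compatibility index keyed by a generalization language and a pattern pair.

```python
# Minimal sketch of a compatibility index keyed by (language, pattern pair).
# All names and the stored score are illustrative placeholders.

class CompatibilityIndex:
    def __init__(self):
        # Maps (language_id, frozenset({pattern_a, pattern_b})) -> compatibility indicator
        self._scores = {}

    def add(self, language_id, pattern_a, pattern_b, indicator):
        self._scores[(language_id, frozenset((pattern_a, pattern_b)))] = indicator

    def lookup(self, language_id, pattern_a, pattern_b):
        # Returns the stored compatibility indicator, or None if the training
        # corpus contained no matching pattern pair.
        return self._scores.get((language_id, frozenset((pattern_a, pattern_b))))

# Example: an NPMI-style score below 0 would suggest the two patterns rarely
# co-occur in the same column and are likely incompatible.
index = CompatibilityIndex()
index.add("L1", r"\d+-\d+-\d+", r"\d+.\d+.\d+", -0.45)
print(index.lookup("L1", r"\d+-\d+-\d+", r"\d+.\d+.\d+"))  # -0.45
```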
- Referring now to FIG. 1, a block diagram of an exemplary network environment 100 suitable for use in implementing embodiments of the invention is shown.
- the system 100 illustrates an environment suitable for facilitating detection of data errors (e.g., erroneous or incompatible data types) by, among other things, using existing data, such as existing data tables.
- the network environment 100 includes a user device 110, an error detection engine 112, a data store 114, and data sources 116a-116n (referred to generally as data source (s) 116) .
- the user device 110, the error detection engine 112, the data store 114, and the data sources 116a-116n can communicate through a network 118, which may include any number of networks such as, for example, a local area network (LAN) , a wide area network (WAN) , the Internet, a cellular network, a peer-to-peer (P2P) network, a mobile network, or a combination of networks.
- the network environment 100 shown in FIG. 1 is an example of one suitable network environment and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the inventions disclosed throughout this document. Neither should the exemplary network environment 100 be interpreted as having any dependency or requirement related to any single component or combination of components illustrated therein.
- the user device 110 and data sources 116a-116n may be in communication with the error detection engine 112 via a mobile network or the Internet, and the error detection engine 112 may be in communication with data store 114 via a local area network.
- Although the environment 100 is illustrated with a network, one or more of the components may directly communicate with one another, for example, via HDMI (high-definition multimedia interface), DVI (digital visual interface), etc.
- Further, one or more components may be integrated with one another; for example, at least a portion of the error detection engine 112 and/or data store 114 may be integrated with the user device 110. For instance, a portion of the error detection engine 312 configured to generate an index (e.g., index manager 316 of FIG. 3) may operate at a server, while the user device may be configured to perform error detection (e.g., via error detection manager 314 of FIG. 3).
- the user device 110 can be any kind of computing device capable of facilitating detection of data errors.
- the user device 110 can be a computing device such as computing device 700, as described above with reference to FIG. 7.
- the user device 110 can be a personal computer (PC) , a laptop computer, a workstation, a mobile computing device, a PDA, a cell phone, or the like.
- the user device can include one or more processors, and one or more computer-readable media.
- the computer-readable media may include computer-readable instructions executable by the one or more processors.
- the instructions may be embodied by one or more applications, such as application 120 shown in FIG. 1.
- the application (s) may generally be any application capable of facilitating a data error detection.
- the application (s) comprises a web application, which can run in a web browser, and could be hosted at least partially server-side.
- the application (s) can comprise a dedicated application.
- the application is integrated into the operating system (e.g., as a service) .
- data error detections may be initiated and/or presented via an application 120 operating on the user device 110.
- the user device 110 via an application 120, might allow a user to initiate a data error detection and to obtain, in response to initiating a data error detection, an indication of one or more data values that may be erroneous.
- the user device 110 can include any type of application that facilitates data error detection.
- An application may be a stand-alone application, a mobile application, a web application, or the like.
- One exemplary application that may be used for detecting data errors, or data suggestions associated therewith, includes a spreadsheet application.
- the functionality described herein may be integrated directly with an application or may be an add-on, or plug-in, to an application.
- User device 110 can be a client device on a client-side of operating environment 100, while error detection engine 112 can be on a server-side of operating environment 100.
- Error detection engine 112 may comprise server-side software designed to work in conjunction with client-side software on user device 110 so as to implement any combination of the features and functionalities discussed in the present disclosure.
- An example of such client-side software is application 120 on user device 110.
- This division of operating environment 100 is provided to illustrate one example of a suitable environment, and it is noted that there is no requirement in any implementation that the error detection engine 112 and the user device 110 remain separate entities.
- In some embodiments, the user device 110 is separate and distinct from the error detection engine 112, the data store 114, and the data sources 116 illustrated in FIG. 1. In other embodiments, the user device 110 is integrated with one or more of the illustrated components. For instance, the user device 110 may incorporate functionality described in relation to the error detection engine 112, such as the error detection manager 314.
- As described herein, data error detection refers to the detection of an error in data, particularly an incompatible type or format of data. Error detection is oftentimes desired, as high-quality, compatible data is generally desired within a set of data (e.g., a column of data).
- inconsistencies in data, or incompatible data can result in challenges for downstream queries and programs, which often make implicit assumptions on how data should look.
- a downstream program or query that produces an aggregate result with a group-by on month may assume dot-separated date formats, which would extract months by splitting using “.” and taking a second component in the value.
- Such utilization may lead to unexpected errors or even corruption of downstream results.
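To make this failure mode concrete, the following sketch (illustrative only; the dates and the assumed YYYY.MM.DD format are hypothetical) shows how a downstream step that extracts the month as the second dot-separated component breaks when a column mixes date formats.

```python
# Illustrative only: a downstream step that assumes dot-separated dates
# of the form YYYY.MM.DD and extracts the month as the second component.
dates = ["2018.07.04", "2018.12.25", "July 4, 2018"]  # last value breaks the assumption

def extract_month(value):
    parts = value.split(".")
    return parts[1] if len(parts) >= 2 else None  # second dot-separated component

for d in dates:
    print(d, "->", extract_month(d))
# 2018.07.04 -> 07
# 2018.12.25 -> 12
# July 4, 2018 -> None  (or, for other formats, a silently wrong value)
```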
- Identification of such error detection may be initiated at the user device 110 in any manner. For instance, upon selection of a set of data (e.g., a column of data) , a “begin” or “search” function button might be selected, for example, by a user via the user interface. By way of example only, a user might select to search for erroneous or incompatible data within the set of data. As another example, identification of erroneous or incompatible data might be automatically initiated.
- a set of data for which error detection is applied can be selected in any number of ways. For instance, a user might use a mouse, selector, touch input, or the like to specify a column of data. As another example, a set of data might be automatically selected. By way of example only, assume a table includes several columns of data. In such a case, the values in a first column may be selected to detect erroneous data.
- a set of data values can be provided as, or as part of, an error detection query to initiate an error detection process.
- a set of data values might be included as an error detection query to result in one or more indications of incompatible data. For example, upon selecting a set of data as well as a “begin” or “go” button or icon, the selected data can be provided to the error detection engine 112 for use in detecting erroneous data.
- the user device 110 communicates with the error detection engine 112 to facilitate identification of erroneous or incompatible data.
- a user can utilize the user device 110 to initiate a search for erroneous or incompatible data via the network 118.
- the network 118 might be the Internet, and the user device 110 interacts with the error detection engine 112 to obtain indications of predicted data errors, or data suggestions thereof.
- the network 118 might be an enterprise network associated with an organization. It should be apparent to those having skill in the relevant arts that any number of other implementation scenarios may be possible as well.
- the error detection engine 112 generally provides indications of predicted data errors. Generally, the error detection engine 112 analyzes a set of data to identify potential data errors. The detected data errors can be provided to the user device 110 and/or used to correct data or provide suggestions related thereto.
- the error detection engine 112, according to embodiments, can be implemented as server systems, program modules, virtual machines, components of a server or servers, networks, and the like.
- the error detection engine 112 receives error detection queries initiated via the user device 110.
- Error detection queries received from a user device can include error detection queries that were manually or explicitly input by the user (input queries) as well as error detection queries that were automatically generated.
- an error detection query might be specified by a user based on the user selecting a set of data, such as a column of data.
- Error detection queries can additionally or alternatively be automatically generated and received at the error detection engine 112. For instance, upon detecting a new column in a table having one or more data values, an error detection query might be automatically triggered.
- the error detection engine 112 can receive error detection queries from any number of devices.
- the error detection engine 112 can analyze the data to identify any errors. As described, to detect errors, the error detection engine 112 may analyze a set of data and utilize a compatibility index to detect erroneous data. In particular, values within a set of data can be generalized to patterns using a generalization language (s) . Such a pattern can then be used in association with an index to identify erroneous data.
- FIG. 2 illustrates an example user interface 200 associated with a data error notification.
- column 210 represents various date values. As shown, dates are generally provided in a four-digit year format. However, value 212 includes a month, day and year format (June 11, 2010) .
- a data error notification 214 can be provided to indicate a potential erroneous data format.
- Such a data error detection notification 214 can be represented in any manner.
- the data error detection notification may include a suggested data transformation, a request to remove the data, or the like. This is only one example of potential user interface aspects of embodiments of the present invention and is not intended to limit the scope of the invention.
- FIG. 3 illustrates an example error detection engine 312.
- the error detection engine 312 includes an error detection manager 314 and an index manager 316.
- the error detection engine 312 can include any number of other components not illustrated.
- one or more of the illustrated components 314 and 316 can be integrated into a single component or can be divided into a number of different components.
- Components 314 and 316 can be implemented on any number of machines and can be integrated, as desired, with any number of other functionalities or services.
- index manager 316 may operate at a server, while error detection manager 314, or aspects thereof, may operate at a user device.
- the error detection engine 312 can communicate with the data repository 318.
- the data repository 318 is configured to store various types of information used by the error detection engine 312.
- the error detection engine 312 provides data to the data repository 318 for storage, which may be retrieved or referenced by the error detection engine 312.
- Examples of types of information stored in data repository 318 may include, for example, data tables, data columns, generalization languages, patterns, compatibility indicators, or the like.
- the error detection manager 314 is generally configured to facilitate error detection within a data set, such as a target data set. As shown in FIG. 3, the error detection manager 314 may include a pattern generator 320 and an error detector 322. In implementation, the error detection manager 314 can receive as input a target data set 302 for which error detection is desired. As previously described, such a target data set can be selected by a user via a user device or automatically selected. By way of example only, a user may select a column of data as target data for which error detection is desired. As another example, upon launching a spreadsheet or document having a table, one or more target data sets may be automatically selected and provided as input for error detection.
- a pattern generator 320 can generate patterns in association with the target data set.
- a pattern generally refers to a generalized representation of a value. Patterns can be generated for any number of data values in the target data set. For example, in some cases, a pattern might be generated for each data value in the target data set.
- a pattern can be generated for a data value in accordance with any number of generalization languages.
- a generalization language generally refers to a method for mapping characters or sets of characters to generate a pattern.
- any number of generalization languages may be utilized to generate patterns.
- a pattern for a value may be generated for each of a set of generalization languages.
- For example, a first generalization language may be used to generate a first pattern, and a second generalization language may be used to generate a second pattern.
- the generalization language or set of generalization languages to utilize for generating patterns may be selected in any number of ways.
- For instance, a predetermined set of generalization languages might be utilized to generate patterns for a value. By way of example, for one type of data value, a first and second generalization language might be used, while for another type of data value, a third and fourth generalization language might be used.
- a particular set of generalization languages to use to generate patterns can be selected, determined, or identified based on data compatibility associated with training data.
- FIG. 4 provides one example of a hierarchical generalization tree.
- A tree H represents a generalization tree defined over an alphabet Σ if each of its leaf nodes corresponds to a character in Σ, and each of its intermediate nodes represents the union of all characters in its children nodes. While only one canonical generalization tree is shown in FIG. 4, there are a variety of ways to generalize a given value v using one generalization tree, as different characters can be generalized into different combinations of internal tree nodes.
- Each distinct generalization can be identified or designated as a generalization language that maps each character to a tree node.
- By way of example, assume two generalization languages L1 and L2 are used to generate patterns. As shown in FIG. 4, assume L1 corresponds with the first level of the hierarchy, and L2 corresponds with the second level of the hierarchy. Such generalization languages can be represented as mappings from each character to a tree node at the corresponding level.
- Utilizing various generalization languages to generate patterns may be desirable, as the resulting patterns can provide varying coverage of compatibility. For instance, with some types of data values, one generalization language may result in a pattern that detects incompatibility of data, while another generalization language may result in a pattern that fails to detect the incompatibility. In some cases, multiple generalization languages may be complementary in their coverage, such that a set of generalization languages may be desired.
- For example, a compatibility indicator associated with the two patterns for the first generalization language L1 may indicate that the two patterns L1(v1) and L1(v2) rarely co-occur in a column and, as such, are incompatible. The two patterns for the second generalization language L2, on the other hand, may be indistinguishable, making the second generalization language L2 ineffective to detect incompatibility between those two patterns. For a different value pair, however, the second generalization language L2 may be more effective in detecting incompatibility between the two patterns.
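The following sketch illustrates this kind of character-level generalization. The two mappings labeled L1 and L2 are assumptions chosen to mirror the discussion (L1 generalizes digits and letters but keeps punctuation at the leaf level; L2 generalizes every non-space character to a single root-level token); they are not the exact languages of FIG. 4.

```python
# Hypothetical generalization languages defined over single characters.
# L1: digits -> '\d', letters -> '\a', punctuation kept as-is (leaf level).
# L2: every non-space character -> a single root-level token '\S'.

def generalize(value, language):
    return "".join(language(ch) for ch in value)

def L1(ch):
    if ch.isdigit():
        return r"\d"
    if ch.isalpha():
        return r"\a"
    return ch  # punctuation stays at the leaf level

def L2(ch):
    return r"\S" if not ch.isspace() else ch

v1, v2 = "2011-01-01", "2011.01.02"
print(generalize(v1, L1))  # \d\d\d\d-\d\d-\d\d
print(generalize(v2, L1))  # \d\d\d\d.\d\d.\d\d   -> distinguishable under L1
print(generalize(v1, L2) == generalize(v2, L2))  # True -> indistinguishable under L2
```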
- An error detector 322 is generally configured to detect error or incompatibility within a target data set utilizing generated patterns.
- the error detector 322 can access a compatibility index that indicates data compatibility and identify whether corresponding data patterns are indicated as compatible or incompatible. To do so, the error detector 322 may generate pairs of patterns for determining compatibility. In such a case, any number of pattern pairs or data pairs may be generated. For example, in some embodiments, pattern pairs can be generated for each combination of values and/or patterns in the target data set. Although described herein as the error detector 322 generating pattern pairs, pattern pairs can be generated by another component, such as, for example by a pattern generator prior to generating patterns or following pattern generation.
- pattern pairs can be generated for each generalization language used to generate patterns for data values.
- a data value pair includes a first value and a second value.
- a first generalization language can be used to generate a first pattern for the first value and a second pattern for the second value.
- a second generalization language can be used to generate a third pattern for the first value and a fourth pattern for the second value.
- a first pattern pair associated with the first generalization language may be generated, and a second pattern pair associated with the second generalization language may be generated.
- the pattern pairs can be used to lookup or identify a corresponding training pattern pair. For instance, assume a pattern pair is ⁇ P 1 , P 2 >. In such a case, a compatibility index may be referenced and used to identify a matching training pattern pair ⁇ P 1 , P 2 > included therein. As can be appreciated, any number of methods can be used to identify and/or lookup a matching pattern pair.
- a training pattern pair associated with a same generalization language as the target pattern pair may be identified. That is, assume a pattern pair associated with a first generalization language is generated from the target data set. In such a case, a training pattern pair corresponding with the same first generalization language may be searched for in the compatibility index.
- a compatibility indicator associated therewith can be identified.
- a compatibility indicator provides an indication of compatibility and/or incompatibility between two patterns.
- a compatibility indicator may indicate a likelihood or frequency of the existence of two patterns in a data set, such as a column.
- a compatibility that exceeds a threshold may indicate that two patterns are compatible with one another, and a compatibility that is lower than the threshold can indicate that the two patterns are incompatible with one another.
- a threshold may be a 0 value, such that compatibility scores above 0 indicate compatibility between patterns, while compatibility scores below 0 indicate incompatibility between patterns.
- two patterns that are indicated as incompatible with one another can indicate a data error.
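A minimal sketch of this detection step follows, assuming an index object that exposes a lookup method returning an NPMI-style score (as in the earlier sketch) and a threshold of 0; the function and parameter names are illustrative assumptions.

```python
from itertools import combinations

def detect_incompatible_pairs(column_values, generalize, index, lang_id, threshold=0.0):
    """Flag value pairs in a single column whose patterns appear incompatible.

    `generalize` maps a value to its pattern under one generalization language;
    `index.lookup(lang_id, p1, p2)` is assumed to return an NPMI-style
    compatibility indicator, or None if the training corpus never contained
    that pattern pair.
    """
    flagged = []
    for v1, v2 in combinations(column_values, 2):
        score = index.lookup(lang_id, generalize(v1), generalize(v2))
        if score is not None and score < threshold:
            flagged.append((v1, v2, score))  # below threshold -> likely data error
    return flagged
```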
- a compatibility indicator can be determined in any number of ways for training pattern pairs.
- In embodiments, a compatibility indicator may be determined using a statistical measure referred to as point-wise mutual information (PMI), or its normalized form (NPMI). For the two patterns a generalization language Lk produces for a pair of values, a pair-wise NPMI score s_k can be determined. PMI and/or NPMI as described herein for value pairs can additionally or alternatively be determined in association with pattern pairs in a similar manner. Let c(v) denote the number of columns in the corpus containing the value v, c(v1, v2) the number of columns containing both v1 and v2, and N the total number of columns. The probability of seeing the value v in a column can then be defined as p(v) = c(v)/N, and the probability of seeing both v1 and v2 in the same column as p(v1, v2) = c(v1, v2)/N. PMI can then be defined as PMI(v1, v2) = log( p(v1, v2) / (p(v1) · p(v2)) ), and the normalized score as NPMI(v1, v2) = PMI(v1, v2) / (-log p(v1, v2)), which ranges from -1 (the values never co-occur) to 1 (they always co-occur).
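To make these definitions concrete, the helper below computes NPMI directly from column counts; the counts in the example are made-up illustrative numbers, not corpus statistics.

```python
import math

def npmi(c1, c2, c12, n_columns):
    """NPMI from column counts: c1 = columns containing pattern 1, c2 = columns
    containing pattern 2, c12 = columns containing both, n_columns = total columns."""
    p1, p2, p12 = c1 / n_columns, c2 / n_columns, c12 / n_columns
    pmi = math.log(p12 / (p1 * p2))
    return pmi / (-math.log(p12))

# Hypothetical counts: two patterns that are individually common but almost
# never appear in the same column -> strongly negative NPMI (incompatible).
print(round(npmi(200, 150, 2, 1000), 3))   # -0.436
# Patterns that co-occur about as often as independence predicts -> near 0.
print(round(npmi(200, 150, 30, 1000), 3))  # 0.0
```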
- multiple pattern pairs may be generated for a pair of values based on utilization of multiple generalization languages.
- two data values “2011-01-01” and “2011.01.02” are converted to a first pattern pair via a first generalization language and converted to a second pattern pair via a second generalization language.
- the compatibility index is searched for both the first pattern pair in association with the first generalization language and the second pattern pair in association with the second generalization language.
- a first compatibility indicator and a second compatibility indicator can be identified for the two data values “2011-01-01” and “2011.01.02. ”
- Such varying compatibility indicators can be analyzed in any number of ways to identify a final compatibility indicator for the data value pair.
- the various compatibility indicators may be aggregated, for example, by determining an average compatibility indicator score.
- an average compatibility indicator may not be optimal as different languages generalize values differently. For example, the value pair “2011-01-01” and “2011.01.02” might only be detected using a first generalization language, while another value pair, such as “2014-01” and “July-01” might only be detected using a second generalization language.
- one approach is to use each language, but predict a pair of values as incompatible when at least one language is confident (producing a low s k (v i , v j ) score) , and ignore languages that are not confident (with high NPMI scores) , because each generalization language may result in values that are difficult to differentiate. For instance, for a set of languages, if one language predicts two values are not compatible (e.g., less than a threshold) , overall the two values are predicted as incompatible, regardless of predictions produced in association with other languages.
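This "at least one confident language" rule can be sketched as follows; the helper and the per-language thresholds are illustrative assumptions rather than the patent's exact formulation.

```python
def aggregate_compatibility(scores_by_language, thresholds):
    """Predict incompatibility if at least one language is confident.

    `scores_by_language` maps a language id to its NPMI-style score for a
    value pair (languages with no statistics are simply omitted);
    `thresholds` maps each language id to its own threshold.
    """
    for lang_id, score in scores_by_language.items():
        if score < thresholds.get(lang_id, 0.0):
            return True  # one confident (low-score) language suffices
    return False  # high-score (unconfident) languages are ignored

# Example: L1 is confident the pair is incompatible, L2 is not.
print(aggregate_compatibility({"L1": -0.6, "L2": 0.4}, {"L1": -0.1, "L2": -0.1}))  # True
```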
- Upon determining compatibility using the set of generalization languages (e.g., L1 and L2), an indication of the compatibility between the two patterns, or the values associated therewith, can be stored or provided to another component, such as a server or user device.
- a user may view the compatibility indicator (s) and decide whether the data is indeed incompatible or erroneous.
- the indication of compatibility can be used to generate an indication of an erroneous value, a data modification and/or a data removal.
- a determination of a specific erroneous value may be made as well as a recommendation to remove the erroneous value or a recommendation for a data correction (e.g., transform the data value into a different format) may be provided.
- a compatibility index is referenced and utilized to detect data error, or data incompatibility.
- the index manager 316 is configured to generate and manage the compatibility index.
- a compatibility index generally refers to an index or data structure that includes compatibility indicators indicating compatibility between two patterns and/or values (also referred to herein as pattern pairs and value pairs) .
- compatibility indicators within an index are generally generated based on compatibility of patterns and/or values in historical data, that is, existing data. In this manner, existing data from various data sources (e.g., external data sources, web data sources, etc. ) can be analyzed to identify whether such data is compatible with one another.
- incompatibility or error detection can be based on a more global collection of data as opposed to restricting error detection to other data included in the data set being analyzed.
- index manager 316 may include a data trainer 330, a pattern generator 332, a compatibility identifier 334, a pattern selector 336, and an index generator 338.
- a training corpus can be generated.
- a data trainer 330 is generally configured to generate a training data corpus.
- the data trainer 330 may initially obtain or access existing data, for example, via the Internet and/or within an Enterprise. For instance, a corpus with over 100 million web tables can be extracted from a web page index of a search engine. As data error detection is generally described herein as being detected within a single data set, such as a single column, tables can be decomposed into individual data sets, or columns.
- In embodiments, sets of data, such as columns, having values that are verified to be statistically compatible can be selected; for example, a set of columns C+ can be selected in this manner.
- the initial data can be analyzed to remove data sets (e.g., columns of data) that do not have statistically compatible data.
- For example, NPMI scores can be determined and used to verify statistical compatibility; co-occurrence and PMI or NPMI scores can be calculated for all data pair variations. In this way, data within existing data sets can be verified as compatible with one another.
- Training examples generally refer to pairs of data values that include compatible data or incompatible data.
- To generate compatible training examples, pairs of data values within a single data set (e.g., column) can be used. By way of example, for a column containing values A, B, and C, value pairs may include (A, B), (A, C), and (B, C). Any number of pairs of data values from within a data set can be utilized to generate compatible pairs of data.
- a value within a data set can be mixed with values in another data set (e.g., column) to produce a synthetic data set (e.g., column) .
- the synthetic data set will include a sole value that is incompatible with the other values in the data set.
- the incompatible value can be paired with each of the other values to generate incompatible value pairs.
- such incompatibility can be verified for example, by comparing the implanted value with the other values.
- a set of compatible pairs of data and a set of incompatible pairs of data are generated as a training set of data included in the training data corpus.
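The construction of training examples described above can be sketched as follows: pairs drawn from a single verified column serve as compatible examples, and implanting a value from a different column yields incompatible examples. Function names are illustrative assumptions.

```python
import random
from itertools import combinations

def compatible_pairs(column):
    # Values drawn from the same (verified) column are treated as compatible.
    return list(combinations(column, 2))

def incompatible_pairs(column, other_column, rng=random):
    # Implant a single value from a different column to create a synthetic
    # column whose implanted value is incompatible with the rest.
    implanted = rng.choice(other_column)
    return [(implanted, v) for v in column if v != implanted]

dates = ["2011.01.01", "2011.01.02", "2011.01.03"]
names = ["Alice", "Bob"]
print(compatible_pairs(dates))
print(incompatible_pairs(dates, names, random.Random(0)))
```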
- the pattern generator 332 is generally configured to generate patterns in association with data values. In this regard, upon obtaining a training data corpus having compatible value pairs and incompatible value pairs, the pattern generator 332 can generate patterns for the value pairs. The pattern generator 332 can generate patterns for each value in association with any number of generalization languages.
- any number of generalization languages may be utilized to generate patterns.
- a pattern for a value may be generated for each of a set of generalization languages.
- For example, a first generalization language may be used to generate a first pattern pair (that corresponds with the value pair), and a second generalization language may be used to generate a second pattern pair (that corresponds with the value pair).
- the generalization language or set of generalization languages to utilize for generating patterns may be selected in any number of ways.
- For instance, a predetermined set of generalization languages might be utilized to generate patterns for a value, or each generalization language (e.g., in a generalization tree) might be utilized. Generalization languages that might be used to generate patterns may be represented and/or identified via a generalization tree.
- By way of example, and as described above with reference to FIG. 4, assume two generalization languages L1 and L2 are used, corresponding to the first and second levels of the hierarchy, respectively. As before, utilizing various generalization languages may be desirable because the resulting patterns provide varying, often complementary, coverage: for one value pair, the patterns L1(v1) and L1(v2) may rarely co-occur in a column (indicating incompatibility) while the corresponding L2 patterns are indistinguishable, whereas for a different value pair the second generalization language L2 may be the more effective one.
- patterns can be identified for values and then joined into pairs.
- the compatibility identifier 334 is generally configured to identify compatibility between pattern pairs and/or value pairs. In this regard, for a pattern pair generated in association with a particular generalization language, the compatibility identifier 334 can identify compatibility between the two patterns, or values associated therewith.
- a compatibility associated therewith can be identified.
- a compatibility indicator provides an indication of compatibility and/or incompatibility between two patterns and/or corresponding values.
- a compatibility that exceeds a threshold may indicate that two patterns and/or values are compatible with one another, and a compatibility score that is lower than the threshold can indicate that the two patterns and/or values are incompatible with one another.
- a threshold may be a 0 value, such that compatibility scores above 0 indicate compatibility between patterns and/or corresponding values, while compatibility scores below 0 indicate incompatibility between the patterns and/or corresponding values.
- two patterns and/or values that are indicated as incompatible with one another can indicate a data error.
- a compatibility score or indicator can be generated in any number of ways.
- As described above, a compatibility indicator may be determined using point-wise mutual information (PMI) or its normalized form (NPMI), and a pair-wise NPMI score s_k can be determined for a pattern pair under a language Lk. The same column counts are used, with c(L(v1)) and c(L(v2)) denoting the number of columns containing each pattern and c(L(v1), L(v2)) the number of columns containing both. Using NPMI as the compatibility of two patterns L(v1) and L(v2) is reliable, particularly when enough data exists, that is, when the occurrence counts c(L(v1)), c(L(v2)), and c(L(v1), L(v2)) are all large.
- With sparse data, however, NPMI scores might fluctuate substantially with small changes of c(L(v1), L(v2)). As such, the co-occurrence counts can be smoothed. In embodiments, Jelinek-Mercer smoothing can be utilized, which computes a weighted sum of the observed count c(L(v1), L(v2)) and its expectation assuming independence, c(L(v1)) · c(L(v2)) / N, where N is the total number of columns; for example, the smoothed count can be taken as f · c(L(v1), L(v2)) + (1 - f) · c(L(v1)) · c(L(v2)) / N, where f is a smoothing factor between 0 and 1.
- In some embodiments, count-min (CM) sketches can be used to compress the co-occurrence statistics.
- CM sketches maintain a two dimensional array M with w columns and d rows (where wd is substantially smaller than the total number of items for space reduction) .
- Each row i ∈ [d] is associated with a hash function h_i from a family H of pairwise-independent hash functions. When a key-value pair (k, v) arrives, the entry at row i, column position h_i(k), written as M[i, h_i(k)], can be incremented by v, for all rows i ∈ [d].
- Using CM sketches to compress co-occurrence statistics can reduce the memory used by a generalization language, often by orders of magnitude (e.g., from 4GB to 40MB), without much impact on counting accuracy or precision/recall loss in error detection.
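A generic count-min sketch in the spirit of this compression step is sketched below; the salted-hash construction and the (w, d) parameters are assumptions for illustration, not the patent's configuration.

```python
import hashlib

class CountMinSketch:
    """Approximate counts for co-occurrence keys (e.g., a serialized pattern pair).

    Uses d rows and w columns; each row has its own salted hash function.
    This is a generic sketch, not the patent's exact parameters.
    """

    def __init__(self, w=1000, d=5):
        self.w, self.d = w, d
        self.M = [[0] * w for _ in range(d)]

    def _hash(self, key, row):
        digest = hashlib.blake2b(key.encode(), salt=row.to_bytes(8, "little")).digest()
        return int.from_bytes(digest[:8], "little") % self.w

    def update(self, key, value=1):
        for i in range(self.d):
            self.M[i][self._hash(key, i)] += value

    def query(self, key):
        # Never underestimates the true count; the minimum across rows is the
        # tightest available estimate.
        return min(self.M[i][self._hash(key, i)] for i in range(self.d))

cms = CountMinSketch()
cms.update("\\d\\d\\d\\d-\\d\\d-\\d\\d||\\d\\d\\d\\d.\\d\\d.\\d\\d", 3)
print(cms.query("\\d\\d\\d\\d-\\d\\d-\\d\\d||\\d\\d\\d\\d.\\d\\d.\\d\\d"))  # >= 3
```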
- a pattern selector 336 is generally configured to select a set of generalization languages for which to include in the compatibility index. As described, different generalization languages can have different advantages for detecting different types of incompatibility. One candidate language is to encode everything at the leaf level, which amounts to no generalization. Such a language is more sensitive in detecting issues, but can also lead to false-positives (e.g., detecting “1918-01-01” and “2018-12-31” as incompatible) due to data sparsity. On the other hand, generalizing everything to the root can result in a pattern that is too insensitive to detect any issues. As such, the pattern selector 336 can be configured to select generalization languages with an appropriate balance in the hierarchy (which is generally determined based on the amount of training corpus--the sparser the data, the more need to generalize) .
- Generalization languages can also be selected with space constraints in mind, as different languages require different amounts of space. The most detailed generalization language, at the leaf level, for example, can require over 100GB of memory for co-occurrence statistics; the more a language generalizes (higher up in the hierarchy), the less space is required.
- error detection is an interactive process on user devices, and as such, the co-occurrence statistics may be memory-resident.
- Without restrictions, 4^52 · 3^10 · 3^33 (approximately 6 × 10^51) possible generalization languages may result. Even if restrictions are imposed to require classes of characters like [A-Z] to generalize to the same level, 144 candidate languages can still exist.
- the pattern selector 336 can analyze data and select a best subset of languages from a set of all languages to use for error detection. In this way, a smaller amount of data can be stored in the compatibility index.
- One method for selecting a subset of languages includes utilizing dynamic-threshold (DT) aggregation.
- Another method for selecting a subset of languages includes utilizing static-threshold (ST) aggregation. Instead of allowing each generalization language Lk ∈ L′ to pick a separate threshold while optimizing the union of the predictions in L′ to maximize recall while maintaining a precision P, each language Lk ∈ L′ can be required to be of at least precision P on T; this is equivalent to finding, for each language, a single threshold such that its individual precision on T is at least P.
- Table 1 shows an example training set T, which includes both compatible examples and incompatible examples. Each t_i corresponds to a pair of cell values.
- Table 1: Generated training examples, where scores are provided based on NPMI from generalization using L_j.
- The optimization question then becomes selecting a subset of languages that maximizes the coverage of incompatibility cases in T. In embodiments, a subset of languages can be selected where each Lk has a precision requirement of P, such that the union can detect as many single-column compatibility errors as possible on the training set T, subject to a memory budget of M.
- a greedy approach may be used to iteratively find a generalization language from a candidate set of generalization languages L C .
- One such algorithm proceeds greedily. The first portion of the algorithm iteratively finds a language L from the candidate set whose addition into the currently selected set of languages G will result in the largest incremental gain, defined as the coverage of new incompatibility cases divided by the language size.
- The selected set G can be iteratively expanded until no further generalization language candidate can be added without violating the memory constraint. Additionally, a best single language Lk can be computed; the coverage of Lk and of G can then be compared, and the better option selected as L′.
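Under assumptions about how coverage and size are represented (each candidate language carries the set of incompatible training examples it covers at the required precision, plus the memory its statistics need), the greedy selection just described can be sketched as follows; this mirrors the gain-per-size rule but is not the patent's exact algorithm listing.

```python
def select_languages(candidates, memory_budget):
    """Greedy selection of generalization languages (illustrative sketch).

    `candidates` maps language id -> (covered, size): `covered` is the set of
    incompatible training examples the language detects at the required
    precision, and `size` is the memory its co-occurrence statistics need.
    """
    selected, covered, used = set(), set(), 0.0
    remaining = dict(candidates)
    while True:
        feasible = {lang: cs for lang, cs in remaining.items()
                    if used + cs[1] <= memory_budget and cs[0] - covered}
        if not feasible:
            break
        # Largest incremental gain: newly covered incompatibility cases per unit size.
        lang = max(feasible, key=lambda l: len(feasible[l][0] - covered) / feasible[l][1])
        cov, size = remaining.pop(lang)
        selected.add(lang)
        covered |= cov
        used += size
    # Also consider the best single language within budget and keep the better option.
    singles = {l: cs[0] for l, cs in candidates.items() if cs[1] <= memory_budget}
    if singles:
        best = max(singles, key=lambda l: len(singles[l]))
        if len(singles[best]) > len(covered):
            return {best}
    return selected
```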
- the index generator 338 can generate a compatibility index for subsequent use in detecting data errors.
- a compatibility index may include various types of data.
- a compatibility index includes a set of pattern pairs and corresponding compatibility indicators.
- pattern pairs generated in association with each selected generalization language can be included in such a compatibility index, or set of indices. For example, assume a first generalization language and second generalization language are identified as optimally being used to detect compatibility. In such a case, pattern pairs and corresponding compatibility indicators associated with the first generalization language and the second generalization language can be included in the compatibility index.
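Putting the pieces together, a compatibility index of the kind described can be built offline roughly as sketched below: for each selected language, count pattern occurrences and co-occurrences over the training columns, then store an NPMI score per observed pattern pair. Names are illustrative assumptions; smoothing and sketching (described above) are omitted for brevity.

```python
import math
from collections import Counter
from itertools import combinations

def build_index(columns, languages):
    """Build {(lang_id, frozenset(pattern pair)): NPMI} from training columns."""
    index = {}
    n = len(columns)
    for lang_id, generalize in languages.items():
        single, pair = Counter(), Counter()
        for column in columns:
            patterns = {generalize(v) for v in column}
            single.update(patterns)
            pair.update(frozenset(p) for p in combinations(sorted(patterns), 2))
        for key, c12 in pair.items():
            p1, p2 = tuple(key)
            p_pair = c12 / n
            pmi = math.log(p_pair / ((single[p1] / n) * (single[p2] / n)))
            # NPMI in [-1, 1]; degenerate full co-occurrence maps to 1.
            index[(lang_id, key)] = 1.0 if p_pair == 1 else pmi / (-math.log(p_pair))
        # Pattern pairs never seen together are simply absent from the index.
    return index
```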
- FIGS. 5-6 provide methods of facilitating data error detection, in accordance with embodiments described herein.
- the methods 500 and 600 can be performed by a computer device, such as device 700 described below.
- the flow diagrams represented in FIGS. 5-6 are intended to be exemplary in nature and not limiting.
- method 500 is directed to facilitating data error detection, in accordance with embodiments of the present invention.
- a target data set for which to identify incompatible data is obtained.
- a target data set may be selected by a user or automatically selected.
- patterns that represent the data values in the target data set are generated. In embodiments, such patterns can be generated in accordance with any number of generalization languages.
- pattern pairs are generated. Each pattern pair includes a pair of patterns that represent a pair of values within the target data set. In some embodiments, pattern pairs can be generated for each combination of values in the data set.
- a compatibility index is referenced.
- a matching pattern pair is searched for in the compatibility index.
- a compatibility indicator is identified, as indicated at block 512.
- a compatibility indicator generally indicates whether the patterns are compatible.
- For pattern pairs indicated as incompatible, such pattern pairs, or values associated therewith, are provided (e.g., to another device), stored, or analyzed. This is indicated at block 514.
- FIG. 6 is directed to generating a compatibility index, in accordance with embodiments of the present invention.
- a training corpus including a set of compatible value pairs and a set of incompatible value pairs, is generated.
- patterns are generated for the compatible value pairs and incompatible value pairs to generate pattern pairs in accordance with a plurality of generalization languages.
- compatibility for each pattern pair is determined. As one example, compatibility can be determined using NPMI, as described herein.
- a subset of the plurality of generalization languages is selected. The subset of generalization languages can be selected to reduce the amount of memory required to store data.
- the pattern pairs and corresponding compatibilities for the selected subset of generalization languages are stored in an index.
- Referring to FIG. 7, an exemplary operating environment for implementing aspects of the technology described herein is shown and designated generally as computing device 700.
- Computing device 700 is just one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the technology described herein. Neither should the computing device 700 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated.
- the technology described herein may be described in the general context of computer code or machine-usable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device.
- program components including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types.
- aspects of the technology described herein may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, specialty computing devices, etc.
- aspects of the technology described herein may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
- computing device 700 includes a bus 710 that directly or indirectly couples the following devices: memory 712, one or more processors 714, one or more presentation components 716, input/output (I/O) ports 718, I/O components 720, an illustrative power supply 722, and a radio (s) 724.
- Bus 710 represents what may be one or more busses (such as an address bus, data bus, or combination thereof) .
- FIG. 7 is merely illustrative of an exemplary computing device that can be used in connection with one or more aspects of the technology described herein. Distinction is not made between such categories as “workstation, ” “server, ” “laptop, ” “handheld device, ” etc., as all are contemplated within the scope of FIG. 7 and refer to “computer” or “computing device. ”
- Computer-readable media can be any available media that can be accessed by computing device 700 and includes both volatile and nonvolatile, removable and non-removable media.
- Computer-readable media may comprise computer storage media and communication media.
- Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program sub-modules, or other data.
- Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices.
- Computer storage media does not comprise a propagated data signal.
- Communication media typically embodies computer-readable instructions, data structures, program sub-modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
- Memory 712 includes computer storage media in the form of volatile and/or nonvolatile memory.
- the memory 712 may be removable, non-removable, or a combination thereof.
- Exemplary memory includes solid-state memory, hard drives, optical-disc drives, etc.
- Computing device 700 includes one or more processors 714 that read data from various entities such as bus 710, memory 712, or I/O components 720.
- Presentation component (s) 716 present data indications to a user or other device.
- Exemplary presentation components 716 include a display device, speaker, printing component, vibrating component, etc.
- I/O port (s) 718 allow computing device 700 to be logically coupled to other devices including I/O components 720, some of which may be built in.
- Illustrative I/O components include a microphone, joystick, game pad, satellite dish, scanner, printer, display device, wireless device, a controller (such as a keyboard, and a mouse) , a natural user interface (NUI) (such as touch interaction, pen (or stylus) gesture, and gaze detection) , and the like.
- a pen digitizer (not shown) and accompanying input instrument are provided in order to digitally capture freehand user input.
- the connection between the pen digitizer and processor (s) 714 may be direct or via a coupling utilizing a serial port, parallel port, and/or other interface and/or system bus known in the art.
- the digitizer input component may be a component separated from an output component such as a display device, or in some aspects, the usable input area of a digitizer may be coextensive with the display area of a display device, integrated with the display device, or may exist as a separate device overlaying or otherwise appended to a display device. Any and all such variations, and any combination thereof, are contemplated to be within the scope of aspects of the technology described herein.
- a NUI processes air gestures, voice, or other physiological inputs generated by a user. Appropriate NUI inputs may be interpreted as ink strokes for presentation in association with the computing device 700. These requests may be transmitted to the appropriate network element for further processing.
- a NUI implements any combination of speech recognition, touch and stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with displays on the computing device 700.
- the computing device 700 may be equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, and combinations of these, for gesture detection and recognition. Additionally, the computing device 700 may be equipped with accelerometers or gyroscopes that enable detection of motion. The output of the accelerometers or gyroscopes may be provided to the display of the computing device 700 to render immersive augmented reality or virtual reality.
- a computing device may include radio (s) 724.
- the radio 724 transmits and receives radio communications.
- the computing device may be a wireless terminal adapted to receive communications and media over various wireless networks.
- Computing device 700 may communicate via wireless protocols, such as code division multiple access ( “CDMA” ) , global system for mobiles ( “GSM” ) , or time division multiple access ( “TDMA” ) , as well as others, to communicate with other devices.
- the radio communications may be a short-range connection, a long-range connection, or a combination of both a short-range and a long-range wireless telecommunications connection.
- a short-range connection may include a connection to a device (e.g., mobile hotspot) that provides access to a wireless communications network, such as a WLAN connection using the 802.11 protocol.
- a Bluetooth connection to another computing device is a second example of a short-range connection.
- a long-range connection may include a connection using one or more of CDMA, GPRS, GSM, TDMA, and 802.16 protocols.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Quality & Reliability (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Computational Mathematics (AREA)
- Pure & Applied Mathematics (AREA)
- Software Systems (AREA)
- Algebra (AREA)
- Bioinformatics & Computational Biology (AREA)
- Probability & Statistics with Applications (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Operations Research (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Stored Programmes (AREA)
Abstract
Methods, computer systems, computer-storage media, and graphical user interfaces are provided for facilitating data error detection, according to embodiments of the present invention. In one embodiment, a target data set having a plurality of values for which to identify incompatible data is obtained. A pattern for each of the plurality of values is generated using at least one generalization language. A pair of patterns that represent a pair of values is utilized to identify a compatibility indicator that corresponds with a pair of training patterns in a compatibility index that match the pair of patterns. The compatibility indicator indicates the pair of patterns are incompatible with one another based on a statistical analysis performed in association with a corpus of data external to the target data set. An indication that the values are incompatible with one another is provided.
Description
Data analysts oftentimes desire to identify data errors within a set of data values. For example, data may be collected in a format or variation that is not compatible with the other data. To effectively analyze or consume the data, however, the collected data may be desired to be compatible with one another. For example, compatible data may be desired to effectively perform table searching, data querying, etc. Identifying errors within data, however, is often difficult and error prone. For example, conventional implementations that use regular expression patterns to detect inconsistent values can be error-prone as such techniques make local decisions based only on values in a given input column.
SUMMARY
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Various aspects of the technology described herein are generally directed to systems, methods, and computer storage media for, among other things, facilitating detection of data errors using existing data external to the data set (e.g., column) being analyzed. In particular, a large corpus of existing data can be utilized to detect co-occurrence statistics. Such statistics can be leveraged to detect errors or incompatibility within a data set, such as a single column of data.
The technology described herein is described in detail below with reference to the attached drawing figures, wherein:
FIG. 1 is a block diagram of an exemplary system for facilitating data error detection, suitable for use in implementing aspects of the technology described herein;
FIG. 2 is an exemplary graphical user interface associated with detected data error, in accordance with aspects of the technology described herein;
FIG. 3 is an example error detection engine in accordance with aspects of the technology described herein;
FIG. 4 is an example of a hierarchical generalization tree, in accordance with aspects of the technology described herein;
FIG. 5 provides an example method for facilitating detection of data errors, in accordance with aspects of the technology described herein;
FIG. 6 provides an example method for generating a compatibility index, in accordance with aspects of the technology described herein; and
FIG. 7 is a block diagram of an exemplary computing environment suitable for use in implementing aspects of the technology described herein.
The technology described herein is described with specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
Overview
Various types of data can be collected and/or reported. As one example, types of data collected can include personal information (e.g., name, phone number, address, email address) , computer data (e.g., IP address, MAC address) , transaction data (e.g., date, time, credit card number, ISBN number) , health-related data (e.g., DEA number, drug name) , etc. In many cases, the data is not collected in a common format. For example, date information may be collected, but may not be collected in the same format (e.g., July 4, 2018; 7/4/2018; 07.04.18, etc. ) . Erroneous data, inconsistent data, or incompatible data, however, can present challenges for downstream queries and/or programs. By way of example only, a downstream query or program that produces an aggregate result with a group-by “month” may assume dot-separated data formats, which would extract months by splitting using ‘.’ and taking the second component. Inconsistent date formats (e.g., date formats that are not dot-separated) included in the data set (e.g., column of data) , however, can lead to errors or corruption of downstream results.
Recognizing errors or incompatibilities within data, however, can be difficult and error-prone. Many conventional systems limit error detection to manually-defined rules, which can be tedious to generate and limited in functionality. Other conventional systems detect errors based only on values in the same input column, which can result in inaccurate error detection.
Accordingly, embodiments of the present disclosure are directed to facilitating automated detection of data error using existing data. In particular, an extensive corpus of existing data can be analyzed to statistically detect incompatibility indicating data error. Such statistical analysis, or portion thereof, can be stored in an index. For instance, an index may include incompatibility indicators that indicate a level or extent of compatibility between data values and/or patterns representing data values. In accordance with receiving an error detection query (e.g., via a user device) , target data associated therewith can be analyzed to identify any errors. As described, to detect errors, an error detection engine can analyze the set of target data and utilize a compatibility index to detect erroneous data. In particular, values within the target data can be generalized to patterns using a generalization language (s) . Such a pattern can then be used in association with an index to identify erroneous data. For example, a pattern pair representing a value pair in the target data can be used to reference a compatibility indicator in an index that corresponds with a matching pattern pair. Such a compatibility indicator can indicate an extent or measure of compatibility.
Overview of Exemplary Environments for Facilitating Data Error Detection
Referring now to FIG. 1, a block diagram of an exemplary network environment 100 suitable for use in implementing embodiments of the invention is shown. Generally, the system 100 illustrates an environment suitable for facilitating detection of data errors (e.g., erroneous or incompatible data types) by, among other things, using existing data, such as existing data tables. The network environment 100 includes a user device 110, an error detection engine 112, a data store 114, and data sources 116a-116n (referred to generally as data source (s) 116) . The user device 110, the error detection engine 112, the data store 114, and the data sources 116a-116n can communicate through a network 118, which may include any number of networks such as, for example, a local area network (LAN) , a wide area network (WAN) , the Internet, a cellular network, a peer-to-peer (P2P) network, a mobile network, or a combination of networks. The network environment 100 shown in FIG. 1 is an example of one suitable network environment and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the inventions disclosed throughout this document. Neither should the exemplary network environment 100 be interpreted as having any dependency or requirement related to any single component or combination of components illustrated therein. For example, the user device 110 and data sources 116a-116n may be in communication with the error detection engine 112 via a mobile network or the Internet, and the error detection engine 112 may be in communication with data store 114 via a local area network. Further, although the environment 100 is illustrated with a network, one or more of the components may directly communicate with one another, for example, via HDMI (high-definition multimedia interface) , DVI (digital visual interface) , etc. Alternatively, one or more components may be integrated with one another, for example, at least a portion of the error detection engine 112 and/or data store 114 may be integrated with the user device 110. For instance, a portion of the error detection engine 312 configured to generate an index (e.g., index manager 316 of FIG. 3) may be integrated with a server in communication with a user device, while the user device may be configured to perform error detection (e.g., via error detection manager 314 of FIG. 3) .
The user device 110 can be any kind of computing device capable of facilitating detection of data errors. For example, in an embodiment, the user device 110 can be a computing device such as computing device 700, as described above with reference to FIG. 7. In embodiments, the user device 110 can be a personal computer (PC) , a laptop computer, a workstation, a mobile computing device, a PDA, a cell phone, or the like.
The user device can include one or more processors, and one or more computer-readable media. The computer-readable media may include computer-readable instructions executable by the one or more processors. The instructions may be embodied by one or more applications, such as application 120 shown in FIG. 1. The application (s) may generally be any application capable of facilitating a data error detection. In some implementations, the application (s) comprises a web application, which can run in a web browser, and could be hosted at least partially server-side. In addition, or instead, the application (s) can comprise a dedicated application. In some cases, the application is integrated into the operating system (e.g., as a service) .
In embodiments, data error detections may be initiated and/or presented via an application 120 operating on the user device 110. In this regard, the user device 110, via an application 120, might allow a user to initiate a data error detection and to obtain, in response to initiating a data error detection, an indication of one or more data values that may be erroneous. The user device 110 can include any type of application that facilitates data error detection. An application may be a stand-alone application, a mobile application, a web application, or the like. One exemplary application that may be used for detecting data errors, or data suggestions associated therewith, includes a spreadsheet application. In some cases, the functionality described herein may be integrated directly with an application or may be an add-on, or plug-in, to an application.
In an embodiment, the user device 110 is separate and distinct from the error detection engine 112, the data store 114, and the data sources 116 illustrated in FIG. 1. In another embodiment, the user device 110 is integrated with one or more illustrated components. For instance, the user device 110 may incorporate functionality described in relation to the error detection engine 112, such as error detection manager 314. For clarity of explanation, we will describe embodiments in which the user device 110, the error detection engine 112, the data store 114, and the data sources 116 are separate, while understanding that this may not be the case in various configurations contemplated within the present invention.
As described, data error detection refers to detection of an error in data, particularly related to an incompatible type or format of data. Error detection is oftentimes desired because high-quality, compatible data is generally needed within a set of data (e.g., a column of data) . For instance, inconsistencies in data, or incompatible data, can result in challenges for downstream queries and programs, which often make implicit assumptions about how data should look. By way of example only, given a table having mixed date formatting, a downstream program or query that produces an aggregate result with a group-by on month may assume dot-separated date formats, which would extract months by splitting using “. ” and taking the second component in the value. Such utilization, however, may lead to unexpected errors or even corruption of downstream results.
Such error detection may be initiated at the user device 110 in any manner. For instance, upon selection of a set of data (e.g., a column of data) , a “begin” or “search” function button might be selected, for example, by a user via the user interface. By way of example only, a user might select to search for erroneous or incompatible data within the set of data. As another example, identification of erroneous or incompatible data might be automatically initiated.
A set of data for which error detection is applied can be selected in any number of ways. For instance, a user might use a mouse, selector, touch input, or the like to specify a column of data. As another example, a set of data might be automatically selected. By way of example only, assume a table includes several columns of data. In such a case, the values in a first column may be selected to detect erroneous data.
A set of data values can be provided as, or as part of, an error detection query to initiate an error detection process. For instance, a set of data values might be included as an error detection query to result in one or more indications of incompatible data. For example, upon selecting a set of data as well as a “begin” or “go” button or icon, the selected data can be provided to the error detection engine 112 for use in detecting erroneous data.
The user device 110 communicates with the error detection engine 112 to facilitate identification of erroneous or incompatible data. In embodiments, for example, a user utilizes the user device 110 to initiate a search for data errors via the network 118. For instance, in some embodiments, the network 118 might be the Internet, and the user device 110 interacts with the error detection engine 112 to obtain indications of predicted data errors, or data suggestions thereof. In other embodiments, for example, the network 118 might be an enterprise network associated with an organization. It should be apparent to those having skill in the relevant arts that any number of other implementation scenarios may be possible as well.
With continued reference to FIG. 1, the error detection engine 112 generally provides indications of predicted data errors. Generally, the error detection engine 112 analyzes a set of data to identify potential data errors. The detected data errors can be provided to the user device 110 and/or used to correct data or provide suggestions related thereto. The error detection engine 112, according to embodiments, can be implemented as server systems, program modules, virtual machines, components of a server or servers, networks, and the like.
In embodiments, the error detection engine 112 receives error detection queries initiated via the user device 110. Error detection queries received from a user device, such as user device 110, can include error detection queries that were manually or explicitly input by the user (input queries) as well as error detection queries that were automatically generated. By way of example, an error detection query might be specified by a user based on the user selecting a set of data, such as a column of data. Error detection queries can additionally or alternatively be automatically generated and received at the error detection engine 112. For instance, upon detecting a new column in a table having one or more data values, an error detection query might be automatically triggered. Generally, the error detection engine 112 can receive error detection queries from any number of devices.
In accordance with receiving an error detection query (e.g., via the user device 110) , the error detection engine 112 can analyze the data to identify any errors. As described, to detect errors, the error detection engine 112 may analyze a set of data and utilize a compatibility index to detect erroneous data. In particular, values within a set of data can be generalized to patterns using a generalization language (s) . Such a pattern can then be used in association with an index to identify erroneous data.
By way of example only, and with reference to FIG. 2, FIG. 2 illustrates an example user interface 200 associated with a data error notification. As illustrated, column 210 represents various date values. As shown, dates are generally provided in a four-digit year format. However, value 212 includes a month, day and year format (June 11, 2010) . As such, a data error notification 214 can be provided to indicate a potential erroneous data format. Such a data error detection notification 214 can be represented in any manner. For example, the data error detection notification may include a suggested data transformation, a request to remove the data, or the like. This is only one example of potential user interface aspects of embodiments of the present invention and is not intended to limit the scope of the invention.
Turning now to FIG. 3, FIG. 3 illustrates an example error detection engine 312. In embodiments, the error detection engine 312 includes an error detection manager 314 and an index manager 316. According to embodiments of the invention, the error detection engine 312 can include any number of other components not illustrated. In some embodiments, one or more of the illustrated components 314 and 316 can be integrated into a single component or can be divided into a number of different components. Components 314 and 316 can be implemented on any number of machines and can be integrated, as desired, with any number of other functionalities or services. By way of example only, index manager 316 may operate at a server, while error detection manager 314, or aspects thereof, may operate at a user device.
The error detection engine 312 can communicate with the data repository 318. The data repository 318 is configured to store various types of information used by the error detection engine 312. In embodiments, the error detection engine 312 provides data to the data repository 318 for storage, which may be retrieved or referenced by the error detection engine 312. Examples of types of information stored in data repository 318 may include, for example, data tables, data columns, generalization languages, patterns, compatibility indicators, or the like.
The error detection manager 314 is generally configured to facilitate error detection within a data set, such as a target data set. As shown in FIG. 3, the error detection manager 314 may include a pattern generator 320 and an error detector 322. In implementation, the error detection manager 314 can receive as input a target data set 302 for which error detection is desired. As previously described, such a target data set can be selected by a user via a user device or automatically selected. By way of example only, a user may select a column of data as target data for which error detection is desired. As another example, upon launching a spreadsheet or document having a table, one or more target data sets may be automatically selected and provided as input for error detection.
Upon obtaining a target data set, such as a set of values within a column, a pattern generator 320 can generate patterns in association with the target data set. A pattern, as used herein, generally refers to a generalized representation of a value. Patterns can be generated for any number of data values in the target data set. For example, in some cases, a pattern might be generated for each data value in the target data set.
In embodiments, a pattern can be generated for a data value in accordance with any number of generalization languages. A generalization language generally refers to a method for mapping characters or sets of characters to generate a pattern. As can be appreciated, any number of generalization languages may be utilized to generate patterns. For example, a pattern for a value may be generated for each of a set of generalization languages. In this regard, for a particular value in a target data set, a first generalization language may be used to generate a first pattern, and a second generalization language may be used to generate a second pattern.
The generalization language or set of generalization languages to utilize for generating patterns may be selected in any number of ways. In some cases, a predetermined set of generalization languages might be utilized to generate patterns for a value. For example, for any data value, a first and second generalization language might be used. As another example, for a particular type of data value, a first and second generalization language might be used, while for another type of data value, a third and fourth generalization language might be used. As described in more detail below, a particular set of generalization languages to use to generate patterns can be selected, determined, or identified based on data compatibility associated with training data.
Generalization languages that might be used to generate patterns may be represented and/or identified via a generalization tree. For example, given an English alphabet ∑ = {α} , FIG. 4 provides one example of a hierarchical generalization tree. In particular, a tree H represents a generalization tree defined over an alphabet ∑, if each of its leaf nodes corresponds to a character α ∈ ∑, and each of its intermediate nodes represents the union of all characters in its children nodes. While only one canonical generalization tree is shown in FIG. 4, there are a variety of ways to generalize a given value v using one generalization tree as different characters can be generalized into different combinations of internal tree nodes. Each distinct generalization can be identified or designated as a generalization language that maps each character to a tree node. In this regard, given a value v = α1α2 … αt and a generalization language L, the value v can be generalized to a pattern by applying the mapping of the generalization language on each character of the value v to produce: L (v) = L (α1) L (α2) … L (αt) .
By way of example only, assume two generalization languages L1 and L2 are used to generate patterns. As shown in FIG. 4, assume L1 corresponds with the first level of the hierarchy, and L2 corresponds with the second level of the hierarchy. Each such generalization language can be represented as a mapping from each character to its corresponding node at that level of the generalization tree.
Now assume two values exist in the same column of data, such as v1 = “2011-01-01” and v2 = “2011.01.02. ” Using the generalization languages L1 and L2, the following patterns can be generated, respectively:
L1 (v1) = “\A [4] -\A [2] -\A [2] ”
L1 (v2) = “\A [4] . \A [2] . \A [2] ”
L2 (v1) = “\D [4] \S\D [2] \S\D [2] ”
L2 (v2) = “\D [4] \S\D [2] \S\D [2] ”
wherein, for example, “\A [4] ” denotes four consecutive “\A. ”
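By way of illustration only, the following Python sketch shows one way a generalization language can be applied to a value to produce patterns of the kind shown above. The character-class mappings l1 and l2 are simplified assumptions introduced for illustration; they only approximate the languages L1 and L2 of FIG. 4 and are not the exact languages induced by the generalization tree.

    # Illustrative sketch: apply a character-level generalization language to a
    # value, then run-length encode the result (e.g., "\A[4]" for four "\A" tokens).
    def generalize(value, char_class):
        tokens = [char_class(c) for c in value]
        pattern, i = "", 0
        while i < len(tokens):
            j = i
            while j < len(tokens) and tokens[j] == tokens[i]:
                j += 1
            run = j - i
            if tokens[i].startswith("\\"):          # generalized class token
                pattern += tokens[i] + ("[%d]" % run if run > 1 else "")
            else:                                    # literal character kept as-is
                pattern += tokens[i] * run
            i = j
        return pattern

    def l1(c):   # assumption: letters and digits generalize to \A, symbols stay literal
        return "\\A" if c.isalnum() else c

    def l2(c):   # assumption: digits to \D, letters to \L, other characters to \S
        if c.isdigit():
            return "\\D"
        if c.isalpha():
            return "\\L"
        return "\\S"

    print(generalize("2011-01-01", l1))   # \A[4]-\A[2]-\A[2]
    print(generalize("2011.01.02", l1))   # \A[4].\A[2].\A[2]
    print(generalize("2011-01-01", l2))   # \D[4]\S\D[2]\S\D[2]
    print(generalize("July-01", l2))      # \L[4]\S\D[2]

Under this sketch, the two date values remain distinguishable under l1 but become indistinguishable under l2, mirroring the discussion that follows.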
As can be appreciated, various generalization languages may be desirable to utilize for generating patterns, as the resulting patterns can provide varying coverage of compatibility. For instance, with some types of data values, one generalization language may result in a pattern that detects incompatibility of data, while another generalization language may result in a pattern that fails to detect the incompatibility. In some cases, multiple generalization languages may be complementary in their coverage, such that a set of generalization languages may be desired.
By way of example only, and with reference to the example patterns provided above, a compatibility indicator associated with the two patterns for the first generalization language L1 may indicate that the two patterns L1 (v1) and L1 (v2) rarely co-occur in a column and, as such, are incompatible. On the other hand, the two patterns for the second generalization language L2 are indistinguishable, making the second generalization language L2 ineffective to detect incompatibility between the two patterns.
As another example, consider another pair of values, v3 = “2014-01” and v4 = “July-01. ” Using generalization language L1, L1 (v3) = L1 (v4) = “\A [4] -\A [2] , ” which would not detect a data error. In comparison, generalization language L2 produces L2 (v3) = “\D [4] \S\D [2] ” and L2 (v4) = “\L [4] \S\D [2] , ” having a compatibility indicator that indicates these two patterns are incompatible. As such, in this example, the second generalization language L2 is more effective in detecting incompatibility between the two patterns.
An error detector 322 is generally configured to detect error or incompatibility within a target data set utilizing the generated patterns. In particular, the error detector 322 can access a compatibility index that indicates data compatibility and identify whether corresponding data patterns are indicated as compatible or incompatible. To do so, the error detector 322 may generate pairs of patterns for determining compatibility. In such a case, any number of pattern pairs or data pairs may be generated. For example, in some embodiments, pattern pairs can be generated for each combination of values and/or patterns in the target data set. Although described herein as the error detector 322 generating pattern pairs, pattern pairs can be generated by another component, such as the pattern generator, either by pairing values prior to pattern generation or by pairing the resulting patterns following pattern generation.
As can be appreciated, pattern pairs can be generated for each generalization language used to generate patterns for data values. In this regard, assume a data value pair includes a first value and a second value. In such a case, a first generalization language can be used to generate a first pattern for the first value and a second pattern for the second value. Similarly, a second generalization language can be used to generate a third pattern for the first value and a fourth pattern for the second value. In such a case, a first pattern pair associated with the first generalization language may be generated, and a second pattern pair associated with the second generalization language may be generated.
Upon determining pattern pairs, the pattern pairs can be used to lookup or identify a corresponding training pattern pair. For instance, assume a pattern pair is <P1, P2>. In such a case, a compatibility index may be referenced and used to identify a matching training pattern pair <P1, P2> included therein. As can be appreciated, any number of methods can be used to identify and/or lookup a matching pattern pair.
As multiple pattern pairs for a value pair may be generated in association with varying generalization languages, in embodiments, a training pattern pair associated with a same generalization language as the target pattern pair may be identified. That is, assume a pattern pair associated with a first generalization language is generated from the target data set. In such a case, a training pattern pair corresponding with the same first generalization language may be searched for in the compatibility index.
In accordance with identifying a matching training pattern pair, a compatibility indicator associated therewith can be identified. As described, a compatibility indicator provides an indication of compatibility and/or incompatibility between two patterns. In this regard, a compatibility indicator may indicate a likelihood or frequency of the co-occurrence of two patterns in a data set, such as a column. In some cases, a compatibility that exceeds a threshold may indicate that two patterns are compatible with one another, and a compatibility that is lower than the threshold can indicate that the two patterns are incompatible with one another. For example, in some cases, a threshold may be a 0 value, such that compatibility scores above 0 indicate compatibility between patterns, while compatibility scores below 0 indicate incompatibility between patterns. As discussed, two patterns that are indicated as incompatible with one another can indicate a data error.
A compatibility indicator can be determined in any number of ways for training pattern pairs. As one example, a compatibility indicator may be determined using a statistical measure referred to as point-wise mutual information, or PMI. In particular, a pair-wise NPMI score (sk) can be determined for a pattern pair for a language:
sk (vi, vj) = NPMI (Lk (vi) , Lk (vj) )
An example for determining PMI and NPMI is provided herein in relation to value pairs for purposes of illustration; however, PMI and/or NPMI as described herein can additionally or alternatively be determined in association with pattern pairs in a similar manner. Let c (v) = | {C | C ∈ C, v ∈ C} | be the number of columns with value v, and c (v1, v2) = | {C | C ∈ C, v1 ∈ C, v2 ∈ C} | be the number of columns with both v1 and v2. The probability of seeing the value v in a column can be defined as p (v) = c (v) /|C|, and the probability of seeing both v1 and v2 in the same column can be defined as p (v1, v2) = c (v1, v2) /|C|. PMI can then be defined as:
PMI (v1, v2) = log (p (v1, v2) / (p (v1) p (v2) ) )
Generally, if v1 and v2 co-occur completely by random chance, then p (v1, v2) = p (v1) p (v2) , and thus p (v1, v2) / (p (v1) p (v2) ) = 1, making PMI (v1, v2) = 0, thereby indicating no statistical correlation. If v1 and v2 are positively correlated and co-occur more often, then PMI (v1, v2) > 0; otherwise PMI (v1, v2) < 0. PMI can be normalized into [-1, 1] using Normalized PMI (NPMI) , defined as NPMI (v1, v2) = PMI (v1, v2) / (-log p (v1, v2) ) .
By way of example only, assume v1 = “2011” and v2 = “2012” . Further assume that |C| = 100M columns in the corpus, and c (v1) = 1M, c (v2) = 2M, and c (v1, v2) = 500K, respectively. In such a case, the following probabilities can be computed: p (v1) = 0.01, p (v2) = 0.02, and p (v1, v2) = 0.005, from which NPMI (v1, v2) = 0.60 > 0 can be calculated, indicating a strong statistical co-occurrence. This suggests that the two values are highly compatible in the same columns. As another example, assume v1 = “2011” and v3 = “January-01” . In such a case, NPMI (v1, v3) can be determined to be -0.47 < 0 because v1 and v3 rarely co-occur, with c (v1) = 1M, c (v3) = 2M, and c (v1, v3) = 10, suggesting that this pair of values is incompatible.
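By way of illustration only, the following sketch computes NPMI directly from the count definitions above, using the illustrative corpus statistics of this example; the counts are taken from the text, and the function name npmi is introduced here for illustration.

    import math

    # NPMI from occurrence and co-occurrence counts, as defined above.
    def npmi(c1, c2, c12, total_columns):
        p1 = c1 / total_columns
        p2 = c2 / total_columns
        p12 = c12 / total_columns
        pmi = math.log(p12 / (p1 * p2))
        return pmi / (-math.log(p12))

    # "2011" vs "2012": frequent co-occurrence, compatible.
    print(npmi(1_000_000, 2_000_000, 500_000, 100_000_000))   # approx. 0.6
    # "2011" vs "January-01": rare co-occurrence, incompatible.
    print(npmi(1_000_000, 2_000_000, 10, 100_000_000))        # approx. -0.47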
As described, in some cases, multiple pattern pairs may be generated for a pair of values based on utilization of multiple generalization languages. By way of example only, assume two data values “2011-01-01” and “2011.01.02” are converted to a first pattern pair via a first generalization language and converted to a second pattern pair via a second generalization language. Now assume the compatibility index is searched for both the first pattern pair in association with the first generalization language and the second pattern pair in association with the second generalization language. In such a case, a first compatibility indicator and a second compatibility indicator can be identified for the two data values “2011-01-01” and “2011.01.02. ” Such varying compatibility indicators can be analyzed in any number of ways to identify a final compatibility indicator for the data value pair.
By way of example only, in some implementations, the various compatibility indicators may be aggregated, for example, by determining an average compatibility indicator score. As can be appreciated, in some cases, an average compatibility indicator may not be optimal as different languages generalize values differently. For example, the value pair “2011-01-01” and “2011.01.02” might only be detected using a first generalization language, while another value pair, such as “2014-01” and “July-01, ” might only be detected using a second generalization language. In another implementation, observing the complementarity of generalization languages (e.g., L1 and L2) , one approach is to use each language, but predict a pair of values as incompatible when at least one language is confident (producing a low sk (vi, vj) score) , and ignore languages that are not confident (with high NPMI scores) , because each generalization language may result in values that are difficult to differentiate. For instance, for a set of languages, if one language predicts two values are not compatible (e.g., less than a threshold) , overall the two values are predicted as incompatible, regardless of predictions produced in association with other languages.
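By way of illustration only, the following sketch shows the aggregation rule just described, where a value pair is flagged as incompatible if any single generalization language scores it below that language's threshold; the language names, scores, and thresholds are hypothetical.

    # Flag a value pair as incompatible if at least one language is confident,
    # i.e., its NPMI score falls below that language's threshold.
    def is_incompatible(scores_by_language, thresholds_by_language):
        return any(scores_by_language[lang] < thresholds_by_language[lang]
                   for lang in scores_by_language if lang in thresholds_by_language)

    scores = {"L1": -0.35, "L2": 0.80}     # hypothetical: L1 detects the issue, L2 does not
    thresholds = {"L1": 0.0, "L2": 0.0}
    print(is_incompatible(scores, thresholds))   # True, based on L1 alone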
Based on an indication of compatibility for a pattern pair and/or value pair, an indication of the compatibility between the two patterns or values associated therewith, can be stored or provided to another component, such as a server or user device. In instances when provided to a user device, a user may view the compatibility indicator (s) and decide whether the data is indeed incompatible or erroneous. In additional or alternative embodiments, the indication of compatibility can be used to generate an indication of an erroneous value, a data modification and/or a data removal. For example, upon determining a pair of values are incompatible, a determination of a specific erroneous value may be made as well as a recommendation to remove the erroneous value or a recommendation for a data correction (e.g., transform the data value into a different format) may be provided.
As generally described, a compatibility index is referenced and utilized to detect data error, or data incompatibility. As such, the index manager 316 is configured to generate and manage the compatibility index. As described, a compatibility index generally refers to an index or data structure that includes compatibility indicators indicating compatibility between two patterns and/or values (also referred to herein as pattern pairs and value pairs) . In accordance with embodiments described herein, compatibility indicators within an index are generally generated based on compatibility of patterns and/or values in historical data, that is, existing data. In this manner, existing data from various data sources (e.g., external data sources, web data sources, etc. ) can be analyzed to identify whether such data is compatible with one another. As such, incompatibility or error detection can be based on a more global collection of data as opposed to restricting error detection to other data included in the data set being analyzed.
To generate a compatibility index, index manager 316 may include a data trainer 330, a pattern generator 332, a compatibility identifier 334, a pattern selector 336, and an index generator 338. To generate a compatibility index for use in detecting erroneous data, a training corpus can be generated. A data trainer 330 is generally configured to generate a training data corpus.
To generate a training data corpus, the data trainer 330 may initially obtain or access existing data, for example, via the Internet and/or within an enterprise. For instance, a corpus with over 100 million web tables can be extracted from a web page index of a search engine. As data errors are generally described herein as being detected within a single data set, such as a single column, tables can be decomposed into individual data sets, or columns.
In embodiments, sets of data, such as columns, having values that are verified to be statistically compatible can be selected. By way of example only, given a set of columns C, a set of columns C+ can be selected having values that are verified to be statistically compatible. In this regard, the initial data can be analyzed to remove data sets (e.g., columns of data) that do not have statistically compatible data. To determine statistical compatibility, NPMI scores can be determined and used to verify statistical compatibility. As such, co-occurrence and PMI or NPMI scores can be calculated for all data pair variations. In this way, data within existing data sets can be verified as compatible to one another.
Such data sets with statistically compatible data can then be used to generate training examples. Training examples generally refer to pairs of data values that include compatible data or incompatible data. As compatibility is verified within a data set, pairs of data values within a single data set (e.g., column) can be used to generate compatible pairs of data. For example, assume a data set includes A, B, and C. In such a case, value pairs may include (A, B) , (A, C) , and (B, C) . Any number of pairs of data values from within a data set can be utilized to generate compatible pairs of data.
To generate incompatible pairs of data, a value within a data set can be mixed with values in another data set (e.g., column) to produce a synthetic data set (e.g., column) . In such a case, it is likely that the synthetic data set will include a sole value that is incompatible with the other values in the data set. As such, the incompatible value can be paired with each of the other values to generate incompatible value pairs. As can be appreciated, such incompatibility can be verified for example, by comparing the implanted value with the other values. As such, a set of compatible pairs of data and a set of incompatible pairs of data are generated as a training set of data included in the training data corpus.
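By way of illustration only, the following sketch generates training pairs in the manner described above: compatible pairs are drawn from within a single (statistically verified) column, and incompatible pairs are formed by implanting a value drawn from a different column. The column contents shown are hypothetical.

    import random
    from itertools import combinations

    # Compatible pairs: all pairs of values drawn from the same verified column.
    def compatible_pairs(column):
        return list(combinations(column, 2))

    # Incompatible pairs: implant one value from another column and pair it
    # with each value of the original column.
    def incompatible_pairs(column, other_column, rng):
        implanted = rng.choice(other_column)
        return [(implanted, v) for v in column]

    dates = ["2011-01-01", "2012-05-07", "2013-11-30"]
    other = ["January-01", "July-04"]
    print(compatible_pairs(dates))
    print(incompatible_pairs(dates, other, random.Random(0)))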
The pattern generator 332 is generally configured to generate patterns in association with data values. In this regard, upon obtaining a training data corpus having compatible value pairs and incompatible value pairs, the pattern generator 332 can generate patterns for the value pairs. The pattern generator 332 can generate patterns for each value in association with any number of generalization languages.
As can be appreciated, any number of generalization languages may be utilized to generate patterns. For example, a pattern for a value may be generated for each of a set of generalization languages. In this regard, for a particular value pair, a first generalization language may be used to generate a first pattern pair (that corresponds with the value pair) , and a second generalization language may be used to generate a second pattern pair (that corresponds with the value pair) .
The generalization language or set of generalization languages to utilize for generating patterns may be selected in any number of ways. In some cases, a predetermined set of generalization languages might be utilized to generate patterns for a value. For example, each generalization language (e.g., in a generalization tree) might be used. Generalization languages that might be used to generate patterns may be represented and/or identified via a generalization tree. By way of example only, assume two generalization languages L1 and L2 are used to generate patterns. As shown in FIG. 4, assume L1 corresponds with the first level of the hierarchy, and L2 corresponds with the second level of the hierarchy. Each such generalization language can be represented as a mapping from each character to its corresponding node at that level of the generalization tree.
Now assume two values exist in the same column of data, such as v1 = “2011-01-01” and v2 = “2011.01.02. ” Using the generalization languages L1 and L2, the following patterns can be generated, respectively:
L1 (v1) = “\A [4] -\A [2] -\A [2] ”
L1 (v2) = “\A [4] . \A [2] . \A [2] ”
L2 (v1) = “\D [4] \S\D [2] \S\D [2] ”
L2 (v2) = “\D [4] \S\D [2] \S\D [2] ”
wherein, for example, “\A [4] ” denotes four consecutive “\A. ”
As can be appreciated, various generalization languages may be desirable to utilize for generating patterns, as the resulting patterns can provide varying coverage of compatibility. For instance, with some types of data values, one generalization language may result in a pattern that detects incompatibility of data, while another generalization language may result in a pattern that fails to detect the incompatibility. In some cases, multiple generalization languages may be complementary in their coverage, such that a set of generalization languages may be desired.
By way of example only, and with reference to the example patterns provided above, a compatibility indicator associated with the two patterns for the first generalization language L1 may indicate that the two patterns L1 (v1) and L1 (v2) rarely co-occur in a column and, as such, are incompatible. On the other hand, the two patterns for the second generalization language L2 are indistinguishable, making the second generalization language L2 ineffective to detect incompatibility between the two patterns.
As another example, consider another pair of values, v3 = “2014-01” and v4 = “July-01. ” Using generalization language L1, L1 (v3) = L1 (v4) = “\A [4] -\A [2] , ” which would not detect a data error. In comparison, generalization language L2 produces L2 (v3) = “\D [4] \S\D [2] ” and L2 (v4) = “\L [4] \S\D [2] , ” having a compatibility indicator that indicates these two patterns are incompatible. As such, in this example, the second generalization language L2 is more effective in detecting incompatibility between the two patterns.
Although generally described herein as identifying compatible training examples and incompatible training examples and then identifying patterns for the pairs, as can be appreciated, in other embodiments, patterns can be identified for values and then joined into pairs.
The compatibility identifier 334 is generally configured to identify compatibility between pattern pairs and/or value pairs. In this regard, for a pattern pair generated in association with a particular generalization language, the compatibility identifier 334 can identify compatibility between the two patterns, or values associated therewith.
In accordance with identifying a pattern pair (a pair of patterns generated from a pair of values in association with a generalization language) , a compatibility associated therewith can be identified. As described, a compatibility indicator provides an indication of compatibility and/or incompatibility between two patterns and/or corresponding values. In some cases, a compatibility that exceeds a threshold may indicate that two patterns and/or values are compatible with one another, and a compatibility score that is lower than the threshold can indicate that the two patterns and/or values are incompatible with one another. For example, in some cases, a threshold may be a 0 value, such that compatibility scores above 0 indicate compatibility between patterns and/or corresponding values, while compatibility scores below 0 indicate incompatibility between the patterns and/or corresponding values. As discussed, two patterns and/or values that are indicated as incompatible with one another can indicate a data error.
A compatibility score or indicator can be generated in any number of ways. As one example, as described above, a compatibility indicator may be determined using a statistical measure referred to as point-wise mutual information, or PMI. In particular, a pair-wise NPMI score (sk) can be determined for a pattern pair for a language:
sk (vi, vj) = NPMI (Lk (vi) , Lk (vj) )
An example for determining PMI and NPMI is provided herein in relation to value pairs for purposes of illustration; however, PMI and/or NPMI as described herein can additionally or alternatively be determined in association with pattern pairs. Let c (v) = | {C | C ∈ C, v ∈ C} | be the number of columns with value v, and c (v1, v2) = | {C | C ∈ C, v1 ∈ C, v2 ∈ C} | be the number of columns with both v1 and v2. The probability of seeing the value v in a column can be defined as p (v) = c (v) /|C|, and the probability of seeing both v1 and v2 in the same column can be defined as p (v1, v2) = c (v1, v2) /|C|. PMI can then be defined as:
PMI (v1, v2) = log (p (v1, v2) / (p (v1) p (v2) ) )
Generally, if v1 and v2 co-occur completely by random chance, then p (v1, v2) = p (v1) p (v2) , and thus p (v1, v2) / (p (v1) p (v2) ) = 1, making PMI (v1, v2) = 0, thereby indicating no statistical correlation. If v1 and v2 are positively correlated and co-occur more often, then PMI (v1, v2) > 0; otherwise PMI (v1, v2) < 0. PMI can be normalized into [-1, 1] using Normalized PMI (NPMI) , defined as NPMI (v1, v2) = PMI (v1, v2) / (-log p (v1, v2) ) .
By way of example only, assume v1 = “2011” and v2 = “2012” . Further assume that |C| = 100M columns in the corpus, and c (v1) = 1M, c (v2) = 2M, and c (v1, v2) = 500K, respectively. In such a case, the following probabilities can be computed: p (v1) = 0.01, p (v2) = 0.02, and p (v1, v2) = 0.005, from which NPMI (v1, v2) = 0.60 > 0 can be calculated, indicating a strong statistical co-occurrence. This suggests that the two values are highly compatible in the same columns. As another example, assume v1 = “2011” and v3 = “January-01” . In such a case, NPMI (v1, v3) can be determined to be -0.47 < 0 because v1 and v3 rarely co-occur, with c (v1) = 1M, c (v3) = 2M, and c (v1, v3) = 10, suggesting that this pair of values is incompatible.
Generally, computing NPMI as the compatibility of two patterns L (v1) and L (v2) is reliable, particularly when enough data exists with large occurrence counts of c (L (v1) ) and c (L (v2) ) . However, due to data sparsity, in some cases c (L (v1) ) , c (L (v2) ) and c (L (v1) , L (v2) ) all approach 0. In such cases, NPMI scores might fluctuate substantially with small changes of c (L (v1) , L (v2) ) . Accordingly, in one embodiment, co-occurrence counts can be smoothed using a smoothing technique. For instance, Jelinek-Mercer smoothing can be utilized. Jelinek-Mercer smoothing computes a weighted sum of the observed co-occurrence count c (L (v1) , L (v2) ) and its expectation assuming independence, c (L (v1) ) c (L (v2) ) /N, where N is the total number of columns. The smoothed count can be expressed as (1-f) c (L (v1) , L (v2) ) + f·c (L (v1) ) c (L (v2) ) /N, where f is the smoothing factor between 0 and 1.
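By way of illustration only, the following sketch applies the Jelinek-Mercer interpolation described above before computing NPMI; the interpolation form and the choice of smoothing factor f are assumptions consistent with the surrounding description rather than exact values from the text.

    import math

    # Smooth the raw co-occurrence count toward its expectation under
    # independence, c1 * c2 / N, using smoothing factor f (an assumed value).
    def smoothed_cooccurrence(c1, c2, c12, n_columns, f=0.1):
        expected = c1 * c2 / n_columns
        return (1 - f) * c12 + f * expected

    def smoothed_npmi(c1, c2, c12, n_columns, f=0.1):
        c12_s = smoothed_cooccurrence(c1, c2, c12, n_columns, f)
        p1, p2, p12 = c1 / n_columns, c2 / n_columns, c12_s / n_columns
        pmi = math.log(p12 / (p1 * p2))
        return pmi / (-math.log(p12))

    # Sparse patterns: a raw co-occurrence of 0 would make NPMI undefined,
    # but the smoothed count keeps the score well-behaved.
    print(smoothed_npmi(50, 40, 0, 100_000_000))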
As previously discussed, for each language L, in order to compute NPMI between two patterns L (v1) and L (v2) , two types of statistics are used and may be stored in memory: (1) the occurrence counts of patterns L (v1) and L (v2) in C, respectively, and (2) the co-occurrence count of L (v1) and L (v2) in the same columns in C. Typically, storing the co-occurrence counts in (2) for all pairs with non-zero values as dictionary entries ( (L (v1) , L (v2) ) →Cnt12) can be expensive, because for many candidate languages there exist hundreds of millions of such pairs. Storing these co-occurrence counts as dictionaries for each language can require hundreds of MB to multiple GB. As such, to further optimize the memory requirement, a probabilistic counting method called count-min (CM) sketch can be used.
Generally, CM sketches maintain a two-dimensional array M with w columns and d rows (where w·d is substantially smaller than the total number of items, for space reduction) . Each row i ∈ [d] is associated with a hash function hi from a family of pairwise-independent hash functions H. When a key-value pair (k, v) arrives, the entry at row i, column position hi (k) , written as M [i, hi (k) ] , can be incremented by v, for all rows i ∈ [d] . At query time, the estimated value for a given key k is the minimum of M [i, hi (k) ] over all rows i ∈ [d] . It can be shown that by setting w = ⌈e/ε⌉ and d = ⌈ln (1/δ) ⌉, it can be guaranteed that the estimate does not exceed v (k) + εN with probability 1-δ, where N = ∑k∈K v (k) is the total of all item values. In other words, with high probability, the estimate will not overestimate its true value v (k) by too much.
Applying CM sketches to compress co-occurrence statistics can reduce the memory used by a generalization language, often by orders of magnitude (e.g., from 4GB to 40MB) , without much impact on counting accuracy or precision/recall in error detection.
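By way of illustration only, the following sketch shows a minimal count-min sketch of the kind described above, used here to store co-occurrence counts of pattern pairs; the hashing scheme and default parameters are illustrative rather than those of any particular implementation.

    import math
    import random

    class CountMinSketch:
        # d rows of w counters; one seeded hash per row; point queries take the
        # minimum across rows, which can overestimate but never underestimate.
        def __init__(self, epsilon=0.001, delta=0.01, seed=0):
            self.w = math.ceil(math.e / epsilon)
            self.d = math.ceil(math.log(1.0 / delta))
            rng = random.Random(seed)
            self.seeds = [rng.getrandbits(64) for _ in range(self.d)]
            self.table = [[0] * self.w for _ in range(self.d)]

        def _col(self, row, key):
            return hash((self.seeds[row], key)) % self.w

        def add(self, key, count=1):
            for i in range(self.d):
                self.table[i][self._col(i, key)] += count

        def estimate(self, key):
            return min(self.table[i][self._col(i, key)] for i in range(self.d))

    cms = CountMinSketch()
    pair = ("\\A[4]-\\A[2]-\\A[2]", "\\A[4].\\A[2].\\A[2]")
    cms.add(pair, 500)
    print(cms.estimate(pair))   # at least 500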
A pattern selector 336 is generally configured to select a set of generalization languages to include in the compatibility index. As described, different generalization languages can have different advantages for detecting different types of incompatibility. One candidate language encodes everything at the leaf level, which amounts to no generalization. Such a language is more sensitive in detecting issues, but can also lead to false positives (e.g., detecting “1918-01-01” and “2018-12-31” as incompatible) due to data sparsity. On the other hand, generalizing everything to the root can result in patterns that are too insensitive to detect any issues. As such, the pattern selector 336 can be configured to select generalization languages with an appropriate balance in the hierarchy (which is generally determined based on the amount of training data available: the sparser the data, the greater the need to generalize) .
Further, generalization languages can be advantageously selected due to space capacity, as different languages require different amounts of space. For example, the most detailed generalization language, at the leaf level, can require over 100GB of memory for co-occurrence statistics, while languages with more generalization higher up in the hierarchy require less space. In some cases, error detection is an interactive process on user devices, and as such, the co-occurrence statistics may be memory-resident. From the tree in FIG. 4, on the order of 6 × 10^51 possible generalization languages may result. In practice, although restrictions can be imposed to require classes of characters like [A-Z] to generalize to the same level, 144 candidate languages can still exist. As discussed, these generalization languages take different amounts of space, have different precision/recall tradeoffs, and can be partially redundant or complementary to each other. As such, the pattern selector 336 can analyze data and select a best subset of languages from the set of all languages to use for error detection. In this way, a smaller amount of data can be stored in the compatibility index.
One method for selecting a subset of languages includes utilizing dynamic-threshold (DT) aggregation. Using dynamic-threshold (DT) aggregation, a dynamic threshold θk can be effectively selected for each generalization language Lk, and cases scoring below the threshold can be predicted as being incompatible. This can be denoted as Hk = {ti ∈ T | sk (ti) < θk} , enabling, for each generalization language, trusting of confident predictions and ignoring the less confident predictions. The portions of Hk falling within the compatible examples and within the incompatible examples can be defined similarly. One method to aggregate the results Hk across all generalization languages is to union the results, as a confident prediction from one generalization language alone can be enough. For a given set of generalization languages L′ and their associated thresholds, precision and recall can be calculated using the labels in T, as the fraction of the unioned predictions that are labeled incompatible (precision) and the fraction of all incompatible examples in T covered by the unioned predictions (recall) .
Another method for selecting a subset of languages includes utilizing static-threshold (ST) aggregation. Instead of allowing each generalization language Lk ∈ L′ to pick a separate threshold while optimizing the union of the predictions in L′ to maximize recall while maintaining a precision P, each language Lk ∈ L′ can be required to be of at least precision P on T. This is equivalent to finding, for each language, a threshold θk (P) such that the predictions of Lk at that threshold achieve a precision of at least P on T. Note that because labeled examples are generated, given a precision requirement P, θk (P) can be statically computed for each language Lk. Because, for a fixed P, θk (P) can be uniquely determined, Hk (θk (P) ) can be written as Hk (P) for short to denote the set of incompatible examples covered by Lk (and likewise for its compatible counterpart) , when the context of P is clear.
By way of example only, and with reference to Tables 1 and 2 below, Table 1 shows an example T containing a set of compatible examples and a set of incompatible examples, where each ti corresponds to a pair of cell values. Scores are provided based on NPMI from generalization using each language Lj.
Table 1: Generated training examples
Now assume a precision requirement P = 0.75 is given. Based on the above, a threshold can be obtained for each generalization language such that the precision of its predictions on T is at least 0.75.
Table 2: Example of language selection
Now that the thresholds θk (P) are uniquely determined, the optimization question becomes selecting a subset of generalization languages L′ ⊆ L to maximize the coverage of incompatibility cases in T.
In this regard, given a corpus of table columns C, a generalization tree H, and a set of candidate languages L induced by H, a subset of languages L′ ⊆ L can be selected, where each Lk has a precision requirement of P, such that the union can detect as many single-column compatibility errors as possible on the training set T, subject to a memory budget of M. Stated differently, a subset of languages can be selected by maximizing the number of incompatibility cases in T covered by the union of the selected languages, subject to the total memory size of the selected languages not exceeding M.
In some embodiments, a greedy approach may be used to iteratively find a generalization language from a candidate set of generalization languages LC. One such algorithm (Algorithm 1) proceeds as follows. In a first portion of the algorithm, a language L is iteratively found from the candidate set whose addition into the currently selected set of candidate languages G will result in the largest incremental gain, defined as the number of new incompatibility cases covered by L (beyond those already covered by G) divided by the size of L.
The selected set can be iteratively expanded in this manner until no further generalization language candidate can be added without violating the memory constraint. Additionally, a best single language Lk can be computed as the single language, fitting within the memory budget, that alone covers the most incompatibility cases. The coverage of Lk and of G can then be compared and used to select the better option as the final selected L′.
By way of example only, and with reference to Table 2 (above) , assume a memory size constraint M = 500MB and a precision requirement P = 0.75 are desired. Thresholds θk (P) and their coverage Hk (P) can be computed. Using Algorithm 1 above, L1 can be initially selected into G because it achieves the largest incremental gain. In the second iteration, only L2 can be selected into G, because L3 is too large in size (200+400>500) . Now, G = {L1, L2} is the first candidate, and it covers five negative samples in total (t1 to t5) . Then, {L3} can be selected as the best singleton, because it alone has the best coverage (4) . Finally, the two candidate sets can be compared, with candidate set {L1, L2} being output as the final selected languages because such generalization languages outperform {L3} .
In accordance with selecting a subset of generalization languages, the index generator 338 can generate a compatibility index for subsequent use in detecting data errors. A compatibility index may include various types of data. In embodiments, a compatibility index includes a set of pattern pairs and corresponding compatibility indicators. As can be appreciated, pattern pairs generated in association with each selected generalization language can be included in such a compatibility index, or set of indices. For example, assume a first generalization language and second generalization language are identified as optimally being used to detect compatibility. In such a case, pattern pairs and corresponding compatibility indicators associated with the first generalization language and the second generalization language can be included in the compatibility index.
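By way of illustration only, the following sketch shows one simple way a compatibility index could be organized and queried, keyed by generalization language and an order-insensitive pattern pair, with an NPMI-style score as the compatibility indicator; the index contents shown are hypothetical.

    # Order-insensitive key for a pattern pair.
    def pattern_key(p1, p2):
        return tuple(sorted((p1, p2)))

    # Hypothetical compatibility index: (language, pattern pair) -> indicator.
    compatibility_index = {
        ("L1", pattern_key("\\A[4]-\\A[2]-\\A[2]", "\\A[4].\\A[2].\\A[2]")): -0.35,
        ("L2", pattern_key("\\D[4]\\S\\D[2]", "\\L[4]\\S\\D[2]")): -0.52,
    }

    def lookup(language, p1, p2):
        return compatibility_index.get((language, pattern_key(p1, p2)))

    score = lookup("L1", "\\A[4]-\\A[2]-\\A[2]", "\\A[4].\\A[2].\\A[2]")
    print(score is not None and score < 0)   # True: the pair is flagged as incompatible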
Exemplary Implementations for Facilitating Data Error Detection
As described, various implementations can be used in accordance with embodiments of the present invention. FIGS. 5-6 provide methods of facilitating data error detection, in accordance with embodiments described herein. The methods 500 and 600 can be performed by a computer device, such as device 700 described below. The flow diagrams represented in FIGS. 5-6 are intended to be exemplary in nature and not limiting.
Turning initially to method 500 of FIG. 5, method 500 is directed to facilitating data error detection, in accordance with embodiments of the present invention. Initially, at block 502, a target data set for which to identify incompatible data is obtained. A target data set may be selected by a user or automatically selected. At block 504, patterns that represent the data values in the target data set are generated. In embodiments, such patterns can be generated in accordance with any number of generalization languages. At block 506, pattern pairs are generated. Each pattern pair includes a pair of patterns that represent a pair of values within the target data set. In some embodiments, pattern pairs can be generated for each combination of values in the data set. At block 508, a compatibility index is referenced. Thereafter, at block 510, for each pattern pair, a matching pattern pair is searched for in the compatibility index. For each of the identified matching pattern pairs in the compatibility index, a compatibility indicator is identified, as indicated at block 512. A compatibility indicator generally indicates whether the patterns are compatible. For pattern pairs indicated as incompatible, such pattern pairs, or values associated therewith, are provided (e.g., to another device) , stored, or analyzed. This is indicated at block 514.
With reference to method 600 of FIG. 6, FIG. 6 is directed to generating a compatibility index, in accordance with embodiments of the present invention. Initially, at block 602, a training corpus, including a set of compatible value pairs and a set of incompatible value pairs, is generated. At block 604, patterns are generated for the compatible value pairs and incompatible value pairs to generate pattern pairs in accordance with a plurality of generalization languages. At block 606, compatibility for each pattern pair is determined. As one example, compatibility can be determined using NPMI, as described herein. At block 608, a subset of the plurality of generalization languages is selected. The subset of generalization languages can be selected to reduce the amount of memory required to store data. At block 610, the pattern pairs and corresponding compatibilities for the selected subset of generalization languages are stored in an index.
Overview of Exemplary Operating Environment
Having briefly described an overview of aspects of the technology described herein, an exemplary operating environment in which aspects of the technology described herein may be implemented is described below in order to provide a general context for various aspects of the technology described herein.
Referring to the drawings in general, and initially to FIG. 7 in particular, an exemplary operating environment for implementing aspects of the technology described herein is shown and designated generally as computing device 700. Computing device 700 is just one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the technology described herein. Neither should the computing device 700 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated.
The technology described herein may be described in the general context of computer code or machine-usable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program components, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. Aspects of the technology described herein may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, specialty computing devices, etc. Aspects of the technology described herein may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
With continued reference to FIG. 7, computing device 700 includes a bus 710 that directly or indirectly couples the following devices: memory 712, one or more processors 714, one or more presentation components 716, input/output (I/O) ports 718, I/O components 720, an illustrative power supply 722, and radio(s) 724. Bus 710 represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the various blocks of FIG. 7 are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and metaphorically, the lines would more accurately be grey and fuzzy. For example, one may consider a presentation component such as a display device to be an I/O component. Also, processors have memory. The inventors hereof recognize that such is the nature of the art, and reiterate that the diagram of FIG. 7 is merely illustrative of an exemplary computing device that can be used in connection with one or more aspects of the technology described herein. Distinction is not made between such categories as “workstation,” “server,” “laptop,” “handheld device,” etc., as all are contemplated within the scope of FIG. 7 and refer to “computer” or “computing device.”
Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Computer storage media does not comprise a propagated data signal.
Communication media typically embodies computer-readable instructions, data structures, program sub-modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
Illustrative I/O components include a microphone, joystick, game pad, satellite dish, scanner, printer, display device, wireless device, a controller (such as a keyboard and a mouse), a natural user interface (NUI) (such as touch interaction, pen (or stylus) gesture, and gaze detection), and the like. In aspects, a pen digitizer (not shown) and accompanying input instrument (also not shown but which may include, by way of example only, a pen or a stylus) are provided in order to digitally capture freehand user input. The connection between the pen digitizer and processor(s) 714 may be direct or via a coupling utilizing a serial port, parallel port, and/or other interface and/or system bus known in the art. Furthermore, the digitizer input component may be a component separated from an output component such as a display device, or in some aspects, the usable input area of a digitizer may be coextensive with the display area of a display device, integrated with the display device, or may exist as a separate device overlaying or otherwise appended to a display device. Any and all such variations, and any combination thereof, are contemplated to be within the scope of aspects of the technology described herein.
A NUI processes air gestures, voice, or other physiological inputs generated by a user. Appropriate NUI inputs may be interpreted as ink strokes for presentation in association with the computing device 700. These requests may be transmitted to the appropriate network element for further processing. A NUI implements any combination of speech recognition, touch and stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with displays on the computing device 700. The computing device 700 may be equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, and combinations of these, for gesture detection and recognition. Additionally, the computing device 700 may be equipped with accelerometers or gyroscopes that enable detection of motion. The output of the accelerometers or gyroscopes may be provided to the display of the computing device 700 to render immersive augmented reality or virtual reality.
A computing device may include radio(s) 724. The radio 724 transmits and receives radio communications. The computing device may be a wireless terminal adapted to receive communications and media over various wireless networks. Computing device 700 may communicate via wireless protocols, such as code division multiple access (“CDMA”), global system for mobiles (“GSM”), or time division multiple access (“TDMA”), as well as others, to communicate with other devices. The radio communications may be a short-range connection, a long-range connection, or a combination of both a short-range and a long-range wireless telecommunications connection. When we refer to “short” and “long” types of connections, we do not mean to refer to the spatial relation between two devices. Instead, we are generally referring to short range and long range as different categories, or types, of connections (i.e., a primary connection and a secondary connection). A short-range connection may include a connection to a device (e.g., mobile hotspot) that provides access to a wireless communications network, such as a WLAN connection using the 802.11 protocol. A Bluetooth connection to another computing device is a second example of a short-range connection. A long-range connection may include a connection using one or more of CDMA, GPRS, GSM, TDMA, and 802.16 protocols.
The technology described herein has been described in relation to particular aspects, which are intended in all respects to be illustrative rather than restrictive.
Claims (20)
- A computing system comprising: a processor; and computer storage memory having computer-executable instructions stored thereon which, when executed by the processor, configure the computing system to: obtain a target data set having a plurality of values for which to identify incompatible data; generate a pattern for each of the plurality of values using at least one generalization language; utilize a pair of patterns that represent a pair of values to identify a compatibility indicator that corresponds with a pair of training patterns in a compatibility index that match the pair of patterns, the compatibility indicator indicating the patterns of the pair of patterns are incompatible with one another based on a statistical analysis performed in association with a corpus of data external to the target data set; and provide an indication that the pair of values are incompatible with one another.
- The computing system of claim 1, wherein the corpus of data external to the target data set includes data tables available on the web.
- The computing system of claim 1, wherein the target data set is selected by a user.
- The computing system of claim 1, wherein the target data set is automatically selected.
- The computing system of claim 1, wherein the at least one generalization language provides a mapping of characters or sets of characters to generate the pattern.
- The computing system of claim 1, wherein the at least one generalization language comprises a first generalization language and a second generalization language within a generalization language hierarchy.
- The computing system of claim 1, wherein the compatibility indicator is generated using normalized pointwise mutual information (NPMI) .
- The computing system of claim 1, wherein the compatibility indicator is generated using co-occurrence statistics.
- The computing system of claim 1, wherein the at least one generalization language comprises a set of generalization languages determined to exceed a precision threshold and within a memory constraint.
- A computer-implemented method for facilitating data error detection, the method comprising: generating a training corpus including a set of compatible value pairs and a set of incompatible value pairs; generating patterns for the compatible value pairs and incompatible value pairs to generate pattern pairs in accordance with a plurality of generalization languages; determining a compatibility score for each pattern pair; selecting a subset of the plurality of generalization languages; and generating a compatibility index including the pattern pairs and corresponding compatibility scores for the selected subset of the plurality of generalization languages.
- The method of claim 10, wherein the training corpus is generated using external data sources.
- The method of claim 10, wherein a pattern represents a value in a generalized manner.
- The method of claim 10, wherein the compatibility score is determined using normalized pointwise mutual information (NPMI) .
- The method of claim 10, wherein the subset of the plurality of generalization languages is selected based on a precision threshold and a memory constraint.
- The method of claim 10, wherein the compatibility score is determined using co-occurrence statistics.
- One or more computer storage media having computer-executable instructions embodied thereon that, when executed by one or more processors, cause the one or more processors to perform a method for facilitating error detection, the method comprising: obtain a target data set having a plurality of values for which to identify incompatible data; generate a pattern for each of the plurality of values using at least one generalization language; utilize a pair of patterns that represent a pair of values to identify a compatibility indicator that corresponds with a pair of training patterns in a compatibility index that match the pair of patterns, the compatibility indicator indicating the patterns of the pair of patterns are incompatible with one another based on normalized pointwise mutual information (NPMI); and provide an indication that the pair of values are incompatible with one another.
- The media of claim 16, wherein the corpus of data external to the target data set includes data tables available on the web.
- The media of claim 16, wherein the target data set is selected by a user.
- The media of claim 16, wherein the target data set is automatically selected.
- The media of claim 16, wherein the at least one generalization language provides a mapping of characters or sets of characters to generate the pattern.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/073495 WO2019140652A1 (en) | 2018-01-19 | 2018-01-19 | Facilitating detection of data errors using existing data |
EP18901089.5A EP3740870A4 (en) | 2018-01-19 | 2018-01-19 | Facilitating detection of data errors using existing data |
US15/781,063 US11275649B2 (en) | 2018-01-19 | 2018-01-19 | Facilitating detection of data errors using existing data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/073495 WO2019140652A1 (en) | 2018-01-19 | 2018-01-19 | Facilitating detection of data errors using existing data |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019140652A1 (en) | 2019-07-25 |
Family
ID=67301855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/073495 WO2019140652A1 (en) | 2018-01-19 | 2018-01-19 | Facilitating detection of data errors using existing data |
Country Status (3)
Country | Link |
---|---|
US (1) | US11275649B2 (en) |
EP (1) | EP3740870A4 (en) |
WO (1) | WO2019140652A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10990470B2 (en) | 2018-12-11 | 2021-04-27 | Rovi Guides, Inc. | Entity resolution framework for data matching |
US11314609B2 (en) * | 2019-10-24 | 2022-04-26 | EMC IP Holding Company LLC | Diagnosing and remediating errors using visual error signatures |
US11467943B2 (en) * | 2019-12-22 | 2022-10-11 | GlassBox Ltd. | System and method for struggle identification |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8386401B2 (en) * | 2008-09-10 | 2013-02-26 | Digital Infuzion, Inc. | Machine learning methods and systems for identifying patterns in data using a plurality of learning machines wherein the learning machine that optimizes a performance function is selected |
US8635197B2 (en) * | 2011-02-28 | 2014-01-21 | International Business Machines Corporation | Systems and methods for efficient development of a rule-based system using crowd-sourcing |
US10198471B2 (en) * | 2015-05-31 | 2019-02-05 | Microsoft Technology Licensing, Llc | Joining semantically-related data using big table corpora |
US11416801B2 (en) * | 2017-11-20 | 2022-08-16 | Accenture Global Solutions Limited | Analyzing value-related data to identify an error in the value-related data and/or a source of the error |
US11392894B2 (en) * | 2018-10-19 | 2022-07-19 | Oracle International Corporation | Systems and methods for intelligent field matching and anomaly detection |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1598967A (en) * | 2003-09-18 | 2005-03-23 | 国际商业机器公司 | System and method for detecting multiple data bit errors in memory |
KR20070113041A (en) * | 2006-05-24 | 2007-11-28 | 엘지전자 주식회사 | Method and apparatus for detecting error of (an) image display device |
JP2013050836A (en) * | 2011-08-31 | 2013-03-14 | Nec Corp | Storage system, method for checking data integrity, and program |
US20160260026A1 (en) | 2013-10-08 | 2016-09-08 | National Institute Of Information And Communications Technology | Device for collecting contradictory expressions and computer program therefor |
US20160078367A1 (en) | 2014-10-15 | 2016-03-17 | Brighterion, Inc. | Data clean-up method for improving predictive model training |
CN104461761A (en) * | 2014-12-08 | 2015-03-25 | 北京奇虎科技有限公司 | Data verifying method, device and server |
Non-Patent Citations (1)
Title |
---|
See also references of EP3740870A4 |
Also Published As
Publication number | Publication date |
---|---|
EP3740870A1 (en) | 2020-11-25 |
EP3740870A4 (en) | 2021-09-08 |
US11275649B2 (en) | 2022-03-15 |
US20190250984A1 (en) | 2019-08-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18901089; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2018901089; Country of ref document: EP; Effective date: 20200819 |