US20210279215A1 - Systems and methods for providing data quality management - Google Patents

Systems and methods for providing data quality management

Info

Publication number
US20210279215A1
Authority
US
United States
Prior art keywords
data
rules
data elements
rule
assessing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/326,803
Inventor
Yatindra NATH
Ankur Garg
Rajeev Tiwari
Pranav Vrat
Amit Mohanty
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Capital One Services LLC
Original Assignee
Capital One Services LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Capital One Services LLC filed Critical Capital One Services LLC
Priority to US17/326,803
Assigned to CAPITAL ONE SERVICES, LLC reassignment CAPITAL ONE SERVICES, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VRAT, PRANAV, MOHANTY, AMIT, GARG, ANKUR, NATH, YATINDRA, TIWARI, RAJEEV
Publication of US20210279215A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/21 Design, administration or maintenance of databases
    • G06F 16/215 Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/11 File system administration, e.g. details of archiving or snapshots
    • G06F 16/122 File system administration, e.g. details of archiving or snapshots using management policies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G06F 16/2365 Ensuring data consistency and integrity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/02 Knowledge representation; Symbolic representation
    • G06N 5/022 Knowledge engineering; Knowledge acquisition
    • G06N 5/025 Extracting rules from data

Definitions

  • the present disclosure generally relates to providing improved data quality management solutions for organizations, and more specifically, to developing automated and contingent rule-making processes to assess data quality for use in organizational decision making.
  • An organization may wish to collect, store, monitor, and analyze data. As the amount of data grows, data amalgamation mechanisms are increasingly relied upon in order to form organizational and corporate strategic decisions. While current data assembly mechanisms may allow for collection of raw data, there exist some shortcomings. For example, collected raw data may not be readily usable, and may need to be modified or summarized prior to analysis or synthesis. Moreover, collected data may constitute low quality data, and errors may occur during collection and/or transformation of data. For example, errors related to data quality may include incomplete data (where data has not been pulled from one source), incorrect transformations to a data element, wrong manual entry of the data, and errors in calculations. Further, at present it is difficult to assess data quality for a large dataset and focus is usually on individual tables of data. With such errors, difficulties in assessment, and without such transformation, premature use of data for analysis may cause poor organizational decision-making resulting in significant monetary costs and reputational damage to a company.
  • For example, where an example variable is an annual percentage rate (APR), manual rules may check whether the contract APR is a number and whether it is greater than zero. The enhanced process described herein may include the creation of suggested rules, including not only a rule to check whether contract APR is a number greater than zero, but also a rule indicating that the contract APR should not be missing if the contract is finalized. This suggested rule provides a further check, thus improving data quality and efficiency over manual rule implementation.
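  • As a minimal sketch of this pair of checks (not the patented implementation), assuming a pandas DataFrame with hypothetical columns "contract_apr" and "contract_status":

```python
import pandas as pd

def check_apr_rules(df: pd.DataFrame) -> pd.DataFrame:
    """Flag rows violating the manual rule and the suggested conditional rule."""
    apr = pd.to_numeric(df["contract_apr"], errors="coerce")
    # Manual rule: contract APR must be a number greater than zero.
    df["apr_manual_fail"] = ~(apr > 0)
    # Suggested conditional rule: APR must not be missing if the contract is finalized.
    df["apr_conditional_fail"] = df["contract_status"].eq("finalized") & apr.isna()
    return df

records = pd.DataFrame({
    "contract_apr": [19.99, None, -1.0, None],
    "contract_status": ["finalized", "finalized", "draft", "draft"],
})
print(check_apr_rules(records)[["apr_manual_fail", "apr_conditional_fail"]])
```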
  • an automated end-to-end data application may be preferable in order to allow for streamlined data procurement, analysis using consistent metrics, and monitoring.
  • a user-intuitive point-and-click interface allowing for rapid and efficient monitoring and exploration of significant quantities of data sets and elements.
  • a comprehensive data tool which allows a user to perform diagnosis of data quality directly within the data tool.
  • Performing such diagnosis on current platforms is inefficient, difficult, or even impossible, requiring excess operator time and processing resources.
  • the present disclosure is directed at addressing one or more of the shortcomings set forth above and/or other problems of existing hardware systems.
  • the system may include a memory storing instructions, and a processor connected to a network.
  • the processor is configured to execute the instructions to: extract a plurality of first data elements from a data source; generate a data profile based on the first data elements; automatically create a first set of rules based on the first data elements and the data profile, the first set of rules assessing data quality according to a threshold; generate a second set of rules based on the first data elements and the first set of rules; extract a plurality of second data elements; assess the second data elements based on a comparison of the second data elements to the second set of rules; detect defects based on the comparison; analyze data quality according to the detected defects; and transmit signals representing the data quality analysis to a client device for display to a user.
  • the method may be performed by a processor, and may include: extracting a plurality of first data elements from a data source; generating a data profile based on the first data elements; automatically creating a first set of rules based on the first data elements and the data profile, the first set of rules assessing data quality according to a threshold; generating a second set of rules based on the first data elements and the first set of rules; extracting a plurality of second data elements; assessing the second data elements based on a comparison of the second data elements to the second set of rules; detecting defects based on the comparison; analyzing data quality according to the detected defects; and transmitting signals representing the data quality analysis to a client device for display to a user.
  • the non-transitory computer-readable medium may store instructions executable by one or more processors to perform a method.
  • the method may include: extracting a plurality of first data elements from a data source; generating a data profile based on the first data elements; automatically creating a first set of rules based on the first data elements and the data profile, the first set of rules assessing data quality according to a threshold; generating a second set of rules based on the first data elements and the first set of rules; extracting a plurality of second data elements; assessing the second data elements based on a comparison of the second data elements to the second set of rules; detecting defects based on the comparison; analyzing data quality according to the detected defects; and transmitting signals representing the data quality analysis to a client device for display to a user.
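  • The patent does not prescribe an implementation for this sequence; a hedged skeleton, with deliberately simple placeholder rule generators, might look like the following:

```python
import pandas as pd

def create_first_rules(df: pd.DataFrame) -> dict:
    """First set of rules: suggest a null check for every column."""
    return {f"{c}_not_null": (lambda col: lambda d: d[col].notna())(c)
            for c in df.columns}

def generate_second_rules(df: pd.DataFrame, rules: dict) -> dict:
    """Second set: keep the first set and add positivity checks for numeric columns."""
    refined = dict(rules)
    for c in df.select_dtypes("number").columns:
        refined[f"{c}_positive"] = (lambda col: lambda d: d[col] > 0)(c)
    return refined

def run_pipeline(first: pd.DataFrame, second: pd.DataFrame, threshold: float = 0.99):
    profile = first.describe(include="all")        # generate a data profile
    rules1 = create_first_rules(first)             # automatically create first rules
    rules2 = generate_second_rules(first, rules1)  # generate the second set of rules
    pass_rates = {name: rule(second).mean() for name, rule in rules2.items()}
    defects = {n: p for n, p in pass_rates.items() if p < threshold}  # detect defects
    return profile, defects                        # results go to a client for display

profile, defects = run_pipeline(pd.DataFrame({"apr": [1.0, 2.0]}),
                                pd.DataFrame({"apr": [3.0, None]}))
print(defects)
```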
  • the system also includes a processor coupled to the GUI and configured to: extract a plurality of first data elements from a data source; generate a data profile based on the first data elements; determine first data feature settings based on the first user interaction; and create a first set of rules based on the first data feature settings, the first set of rules assessing quality of the first data elements according to a threshold.
  • the method includes: displaying, on a graphic user interface (GUI), a plurality of user-adjustable data feature settings to a user; detecting, from the GUI, a first user interaction modifying the data feature settings; extracting, by a processor coupled to the GUI, a plurality of first data elements from a data source; generating, by the processor, a data profile based on the first data elements; determining, by the processor, first data feature settings based on the first user interaction; and creating, by the processor, a first set of rules based on the first data feature settings, the first set of rules assessing quality of the first data elements according to a threshold.
  • Yet another aspect of the present disclosure is directed to a non-transitory computer-readable medium storing instructions executable by a processor to provide a graphic user interface (GUI) and perform a method for providing data quality management.
  • the method comprising: displaying, on the GUI, a plurality of user-adjustable data feature settings to a user; detecting, from the GUI, a first user interaction modifying the data feature settings; extracting, by a processor coupled to the GUI, a plurality of first data elements from a data source; generating, by the processor, a data profile based on the first data elements; determining, by the processor, first data feature settings based on the first user interaction; and creating, by the processor, a first set of rules based on the first data feature settings, the first set of rules assessing quality of the first data elements according to a threshold.
  • FIG. 1 is a schematic block diagram illustrating an exemplary system for providing data quality management, consistent with the disclosed embodiments
  • FIG. 2 is a schematic block diagram illustrating an exemplary network server, used in the system of FIG. 1 ;
  • FIG. 3 is a schematic block diagram illustrating an exemplary controller, used in the system of FIG. 1 ;
  • FIG. 4 is a diagrammatic illustration of an exemplary graphical user interface used for monitoring and configuring variables related to data quality management
  • FIG. 5 is a diagrammatic illustration of an exemplary graphical user interface used for tracking results related to variable data quality
  • FIG. 6 is a diagrammatic illustration of an exemplary graphical user interface used for tracking details related to variable data quality
  • FIG. 7 is a diagrammatic illustration of an exemplary graphical user interface used for tracking trends related to variable data quality
  • FIG. 8 is a diagrammatic illustration of an exemplary graphical user interface used for tracking decision analysis related to variable data quality
  • FIG. 9 is a diagrammatic illustration of an exemplary graphical user interface used for tracking a decision tree related to variable data quality
  • FIG. 10 is a flow chart illustrating an exemplary process performed by the system in FIG. 1 , consistent with the disclosed embodiments.
  • FIG. 11 is a flow chart illustrating another exemplary process performed by the system in FIG. 1 , in accordance with the disclosed embodiments.
  • FIG. 1 is a schematic block diagram illustrating an exemplary embodiment for providing data quality management, consistent with the disclosed embodiments.
  • system 100 may include one or more personal devices and/or user equipment 120 , a controller 130 , and a network 150 .
  • Personal devices 120 may include personal computing devices such as, for example, desktop computers, notebook computers, mobile devices, tablets, smartphones, wearable devices such as smart watches, smart bracelets, and Google Glass™, and any other personal devices. Personal devices 120 may communicate with other parts of system 100 through network 150 . Personal devices 120 may also include software and executable programs configured to communicate with network 150 and customize data quality management for one or more users monitoring and configuring data quality. Other software and executable programs are contemplated.
  • System 100 may allow for one or more personal devices 120 and/or controllers 130 to transfer representative data profiles, historical data, metadata, and customized user-adjustable data quality management features associated with a data quality application (e.g., illustrated in FIGS. 4-9 ) over network 150 to a cloud platform 190 and/or controller 130 .
  • System 100 may include mobile or stationary (not shown) personal devices 120 located in residential premises and non-residential premises configured to communicate with network 150 .
  • Personal devices 120 and/or controller 130 may connect to network 150 by Wi-Fi or wireless access points (WAP).
  • Bluetooth® or similar wireless technology may be contemplated.
  • Network 150 may include a wireless network, such as a cellular network, a satellite network, the Internet, or a combination of these (or other) networks that are used to transport data.
  • network 150 may be a wired network, such as an Ethernet network.
  • Network 150 may transmit, for example, authentication services that enable personal devices 120 and/or controller 130 to access information, and may transmit data quality management instructions according to a representative profile data, created rule data, segment or cluster data, and associated metadata.
  • personal devices 120 and controller 130 may communicate with one or more servers in cloud platform 190 through network 150 .
  • Cloud platform 190 may comprise one or more network servers 160 , third party servers 170 , and/or databases 180 .
  • Servers 160 and 170 may provide cloud services for users and their personal devices 120 and/or controller 130 .
  • a cloud-based architecture may be implemented comprising a distributed portion that executes at another location in network 150 and a corresponding cloud portion that executes on a network server 160 in cloud platform 190 .
  • Servers in cloud platform 190 may also communicate with a transceiver of controller 130 over network 150 using appropriate cloud-based communication protocols, such as Simple Object Access Protocol (SOAP) or Representational State Transfer (REST) and/or other protocols that would be known to those skilled in the art.
  • Such communication may allow for remote control of data quality management operations of controller 130 by, for example, identifying representative data profiles and data quality preferences associated with the identified data profiles.
  • Such communication may also allow for remote control of data quality management monitoring operations by, for example, a user operating a GUI on a data quality management application executed on a personal device 120 and/or on controller 130 to configure user-adjustable data feature settings or monitor related variables.
  • network 150 may be accessible to network servers 160 , third party servers 170 , and databases 180 in cloud platform 190 , for sending and receiving information, such as profile data, rule data, and segment data, within system 100 .
  • Network server 160 , third party server 170 , and database 180 may include network, cloud, and/or backup services.
  • network server 160 may include a cloud computing service such as Microsoft Azure™ or Amazon Web Services™. Additional cloud-based wireless access solutions compatible with LTE (e.g., using the 3.5 GHz spectrum in the US) are contemplated.
  • third party server 170 may include a messaging or notification service, for example, that may notify or alert a monitoring user of at least one rule update through the cloud network.
  • a selected rule from a set of applicable rules may be updated and may include at least one of a null rule, a range rule, a uniqueness rule, a valid value rule, a format rule, a conditional rule, or a consistency rule, but other rule types are contemplated.
  • a conditional rule (if A, then B) may flag accounts as defective when a variable is missing even though a fulfilled condition requires it to be present.
  • FIG. 2 is a schematic block diagram illustrating an exemplary network server 160 , used in the exemplary system 100 of FIG. 1 . It is contemplated that one or more personal devices 120 may include similar structures described in connection with network server 160 .
  • network server 160 may include, among other things, a processor 220 , input/output (I/O) devices 230 , a memory 240 , and a database 260 , each coupled to one or more interconnected internal buses (not shown).
  • Memory 240 may store, among other things, server programs 244 and an operating system 246 .
  • Server programs 244 may be executed by cloud-based architecture or, alternatively, by a separate software program, such as a data quality management application (as further described with reference to FIGS. 4-9 ).
  • Software program 244 may be located in personal devices 120 , or in alternative embodiments, in a controller 130 (as described with reference to FIG. 3 ). Software program 244 may configure remote control and update of user-adjustable data feature settings according to existing profile data, rule data, and segment data.
  • Memory 240 and/or database 260 may store profile data 252 based on individual and/or aggregate data profile behavior.
  • Profile data 252 may be input directly or manually by a user into a data quality management application that is executed on a personal device 120 and/or by a controller 130 .
  • Profile data 252 may also be automatically generated based on extracted data elements.
  • Memory 240 may also store other data and programs.
  • Profile data 252 may include representative data profiles related to organizational information including, for example, affiliated user login and/or other registration identification (ID) or user credentials, authentication timestamp information, network node or access point location(s) and/or preferences, and other metadata generated by algorithms in server programs 244 .
  • server programs 244 may store metadata generated by algorithms in server programs 244 .
  • Memory 240 and/or database 260 may also store rule data 254 and segment data 256 .
  • Rule data 254 and segment data 256 may be directly and manually input to a data quality management application that is executed on a personal device 120 and/or controller 130 .
  • rule data 254 and segment data 256 may be automatically generated based on extracted data elements and profile data 252 , historical data, or other metadata.
  • Rule data 254 may include assessment of data quality according to a determined threshold, and may include data related to at least one of a null rule, range rule, uniqueness rule, valid value rule, format rule, conditional rule, or consistency rule. Rule data 254 may further include data related to a support value, a confidence value, and a lift ratio.
  • Segment data 256 may include data-defining generated clusters based on detection of defects in extracted data elements.
  • Data may be transformed using a general cluster computing system such as Spark™, and data may be extracted according to utilities, including third-party platforms.
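  • For instance, a minimal Spark-based profiling sketch (the storage path and table layout below are hypothetical) might compute per-column missing rates as one input to automated rule creation:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-profile").getOrCreate()
df = spark.read.parquet("s3://example-bucket/accounts/")  # hypothetical source

# Fraction of null values per column, one input to automated rule suggestion.
missing_rates = df.select([
    F.avg(F.col(c).isNull().cast("double")).alias(c) for c in df.columns
])
missing_rates.show()
```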
  • Database 260 may include Microsoft SQL™ databases, SharePoint™ databases, Oracle™ databases, Sybase™ databases, or other relational databases.
  • Memory 240 and database 260 may be implemented using any volatile or non-volatile memory including, for example, magnetic, semiconductor, tape, optical, removable, non-removable, or any other types of storage devices or computer-readable mediums.
  • I/O interfaces 230 may include not only network interface devices, but also user interface devices, such as one or more keyboards, mouse devices, and GUIs for interaction with personal devices 120 and/or by controller 130 .
  • GUIs may include a touch screen where a monitoring user may use his or her fingers to provide input, or a screen that can detect the operation of a stylus.
  • GUIs may also display a web browser for point-and-click input operation.
  • Network server 160 may provide profile data 252 , rule data 254 , and segment data 256 for use in a data quality management application (as further described with reference to FIGS. 4-9 ) that is displayed and executed on personal device 120 and/or controller 130 .
  • personal device 120 and/or controller 130 may transmit profile data 252 , rule data 254 , and segment data 256 to network server 160 , from network 150 through I/O device 230 , and may analyze such data to control and/or restrict variable and/or rule features and settings by modifying and configuring data quality management.
  • Network server 160 may store a copy of profile data 252 , rule data 254 , and segment data 256 , for example, in memory 240 , database 260 , or in any other database accessible to server 160 .
  • FIG. 3 is a schematic block diagram illustrating an exemplary controller 130 , used in the exemplary system of FIG. 1 .
  • controller 360 may be capable of communicating with a transceiver 310 , network 150 , cloud platform 190 , and personal devices 120 .
  • Transceiver 310 may be capable of receiving one or more data quality management instructions (further described with reference to FIGS. 4-9 ) from one or more personal devices 120 and/or cloud platform 190 over network 150 .
  • Transceiver 310 may be capable of transmitting profile data 352 from controller 360 to one or more personal devices 120 and/or cloud platform 190 over network 150 .
  • Controller 360 may transmit profile data 352 , rule data 354 , and segment data 356 . This information may be stored in memory 340 and/or database 362 .
  • Controller 360 may include one or more processors 320 , input/output 330 , controller programs 344 and operating system 346 . Controller 360 may function in a manner similar to network server 160 and may operate independently or cooperatively with network server 160 .
  • Controller 360 may be configured to receive data quality management instructions to control, send, and/or edit data quality management features and/or settings.
  • the user-adjustable data feature settings may include at least one of a threshold, a rule, a score, or a test feature. Other data features and/or settings are contemplated.
  • a user may monitor variable and data quality according to requests from one or more registered tracking users executing a data quality management application on personal device 120 and/or through a controller 130 .
  • Controller 360 may function to include input according to rule data 354 based on one or more rules.
  • a first set of applicable rules may automatically be generated based on extracted data elements and/or based on a plurality of data feature settings.
  • Rule data 354 may include a null rule 370 , range rule 372 , uniqueness rule 374 , valid value rule 376 , or format rule 378 .
  • a consistency and conditional rule (not shown) may also be contemplated.
  • Other data quality management features may be utilized based on the particular data being analyzed, as well as for a specific purpose for which the data is being analyzed.
  • FIG. 4 is a diagrammatic illustration of an exemplary graphical user interface used for monitoring and configuring variables related to data quality management.
  • GUI 400 may be displayed as part of a screen executed by an application on a personal device 120 .
  • Exemplary GUI 400 may include a “User Defined Rules Summary” or a “Statistical Rules Summary” to assess data quality including metrics and percentage assessments including, for example, “Overall Pass” and further divided in the rules categories of “Accuracy,” “Completeness,” and “Consistency.”
  • a “Table” may extract data from a particular “Database,” and a list of “Variables” may be displayed for further exploration and monitoring by a tracking user according to a selected “Table.” For example, in FIG. 4 the “Table” according to “PL_LSMTGN_ACCT_DLY” is selected, and variables such as “PMT_PDUE_CT” and “TOT_PROM_CNT” are displayed for analysis.
  • Variables may be organized and categorized according to “Variable Name,” “Type,” “#Tests,” “Score” (an overall score for all tests, and a “Score” for each category of rules), “AC (Accuracy),” “CP (Completeness),” and/or “CO (Consistency).”
  • the “results” may also be organized and categorized according to “Statistical Rule” categories including, for example, “FA (Frequency Anomaly),” “GS (General Statistics),” “OD (Outlier Detection),” and “% MIS (Missing).” Each category may include a numerical value or ranking indicating an assessment of the variable. A score may be calculated to see how many tests a variable is failing, by what “severity” it is failing the test, or a combination of these two, as well as the number of rows failing at least one test.
  • Sorting and filtering features may also be incorporated to help the user prioritize elements to analyze, for example, elements with worse data quality first, based on a chosen metric in “Score.”
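  • One plausible scoring scheme consistent with this description (the patent leaves the exact metric open) combines the count of failing tests with a per-test severity weight, then sorts worst-first; the table below is hypothetical:

```python
import pandas as pd

tests = pd.DataFrame({
    "variable": ["PMT_PDUE_CT", "PMT_PDUE_CT", "TOT_PROM_CNT"],
    "failed":   [True, False, True],
    "severity": [2.0, 1.0, 1.0],   # hypothetical per-test weights
})

# Score combines how many tests fail and how severe those failures are.
score = (tests.assign(weighted=tests["failed"] * tests["severity"])
              .groupby("variable")[["failed", "weighted"]]
              .sum()
              .sort_values("weighted", ascending=False))  # worst data quality first
print(score)
```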
  • a “Variable Distribution” may also be displayed. These data feature settings may be adjustable and customized according to the display preferences of a monitoring user. For example, a display scale can be changed between linear or logarithmic for “Variable Distribution.”
  • User-adjustable data feature settings may be provided as a tab, a slider, a drop-down menu, or a toggle. Other user-adjustable data feature settings and variable displays may be contemplated.
  • FIG. 5 is a diagrammatic illustration of an exemplary graphical user interface 500 used for tracking results related to variable data quality.
  • GUI 500 may be displayed as part of a screen executed by an application on a personal device 120 .
  • GUI 500 may include a “Statistical Rules Summary” including an assessment of “Frequency Anomaly,” “Outlier Detection,” “Missing,” and “General Statistics” related to variable “PMT_PDUE.”
  • GUI 500 may include drop-down menus and other input fields (not shown) to analyze and/or summarize a selection of one or all rules related to this variable.
  • GUI 500 may allow a user to “Include” a summary of “All Rules” (or only user-defined or statistical rules) and may provide a “Score.” Tabs for “Results,” “Details,” and “Trends” can be selected by a monitoring user.
  • “Results” are displayed indicating tracking results related to rule data quality. These results are displayed in line graph and tabular form.
  • the line graph may be representative of a summary of the data quality of the selected element.
  • the tabular form may be representative of a listing down of all applicable rules for a selected element.
  • User-defined rules as well as statistical rules may be displayed, and may illustrate rule performance in comparison with historical data. Other types of graphs and graphical forms of displaying data are contemplated.
  • “Accuracy,” “Completeness,” and “Consistency” can be assessed.
  • a “Change Threshold” setting allows a monitoring user to change the threshold visible on the dashboard. (Further, a threshold suggestion based on a historical data profile may be pre-populated.) Historical data for past cycles may also be displayed as relating to a selected one or more rules that may be automatically generated. Additionally, “Decision Tree Analysis” and “Defects” buttons may be included in the bottom right of the screen to open a separate GUI for display (as further described with reference to FIG. 9 ).
  • FIG. 6 is a diagrammatic illustration of an exemplary graphical user interface 600 used for tracking details related to variable data quality.
  • GUI 600 may be displayed as part of a screen executed by an application on a personal device 120 .
  • GUI 600 may include a “Statistical Rules Summary” including an assessment of “Frequency Anomaly,” “Outlier Detection,” “Missing,” and “General Statistics” related to a different variable “EXTENSN_CNT.”
  • GUI 600 may include drop-down menus and other input fields (not shown) to analyze and/or summarize a selection of one or all rules related to this variable.
  • GUI 600 may allow a user to “Include” a summary of “All Rules” and may provide a “Score.” Tabs for “Results,” “Details,” and “Trends” can be selected by a monitoring user.
  • “Details” are displayed indicating tracking details related to an element data profile. These results are displayed in a vertical bar graph and tabular form. Other types of graphs and graphical forms of displaying data are contemplated. For example, a “Box Plot” may be shown to display the data distribution over time.
  • FIG. 7 is a diagrammatic illustration of an exemplary graphical user interface 700 used for tracking trends related to variable data quality.
  • GUI 700 may be displayed as part of a screen executed by an application on a personal device 120 .
  • GUI 700 may include a “Statistical Rules Summary” including an assessment of “Frequency Anomaly,” “Outlier Detection,” “Missing,” and “General Statistics” related to variable “EXTNSN_CNT.”
  • GUI 700 may include drop-down menus and other input fields (not shown) to analyze and/or summarize a selection of one or all rules related to this variable.
  • GUI 700 may allow a user to “Include” a summary of “All Rules” and may provide a “Score.” Tabs for “Results,” “Details,” and “Trends” can be selected by a monitoring user.
  • “Trends” are displayed indicating tracking trends related to rule data quality. These results are displayed in a line graph form. Other types of graphs and graphical forms of displaying data are contemplated.
  • FIG. 8 is a diagrammatic illustration of an exemplary graphical user interface 800 used for tracking decision tree analysis related to variable data quality.
  • GUI 800 may be displayed as part of a screen executed by an application on a personal device 120 .
  • GUI 800 may include a “Statistical Rules Summary” including an assessment of “Frequency Anomaly,” “Outlier Detection,” “Missing,” and “General Statistics” related to variable “EXTNSN_CNT.”
  • GUI 800 may include drop-down menus and other input fields (not shown) to analyze and/or summarize a selection of one or all rules related to this variable.
  • GUI 800 may allow a user to “Include” a summary of “All Rules” and may provide a “Score.”
  • Tabs for “Analysis” and “Tree” can be selected by a monitoring user.
  • the tab for “Analysis” is displayed indicating analysis related to rule data quality.
  • This “Analysis” is displayed in a horizontal bar graph and corresponds to a “Positive Outlier” test of multiple data nodes. All clusters (segments) derived from the decision tree analysis may be displayed.
  • a bar graph may be sorted according to the cluster having the most “defects” or the most observations.
  • a “Node Definition” may be the definition of the segment.
  • GUI 800 may also display a contiguous panel of “Variable Importance” assessing and displaying in percentage and graphical terms the “Top 5 Variables.”
  • the “Top 5 Variables” are the variables identified as important in defining multivariate clusters. Display of this information may aid users in performing further analysis.
  • Other types of graphs and graphical forms of displaying data are contemplated.
  • FIG. 9 is a diagrammatic illustration of an exemplary graphical user interface 900 used for tracking a decision tree related to variable data quality.
  • GUI 900 may be displayed as part of a screen executed by an application on a personal device 120 .
  • GUI 900 may include a “Statistical Rules Summary” including an assessment of “Frequency Anomaly,” “Outlier Detection,” “Missing,” and “General Statistics,” related to variable “EXTNSN_CNT.”
  • GUI 900 may include drop down menus and other input fields (not shown) to analyze and/or summarize a selection of one or all rules related to this variable.
  • GUI 900 may allow a user to “Include” a summary of “All Rules” and may provide a “Score.”
  • Tabs for “Analysis” and “Tree” can be selected by a monitoring user.
  • the tab for “Tree” is displayed indicating a decision tree analysis related to rule data quality.
  • This “Tree” is displayed as a segmentation tree and corresponds to a “Positive Outlier” test of multiple data nodes.
  • a “Node Definition” may be displayed for understanding how the segment is defined.
  • Other types of graphs and graphical forms of displaying data are contemplated.
  • FIG. 10 is a flow chart illustrating an exemplary process 1000 that one or more processors may perform in accordance with the disclosed embodiments. While process 1000 is described herein as a series of steps, it is to be understood that the order of the steps may vary in other implementations. In particular, steps may be performed in any order, or in parallel.
  • process 1000 may include extracting a plurality of data elements from a data source.
  • Data elements may be pulled from data sources automatically or may be uploaded from an existing software program.
  • Process 1000 may include parsing a plurality of data elements according to an alphanumeric identifier or a data list from the data source.
  • Data may include historical data and/or metadata.
  • Data sources may include Teradata™, SAS™, SQL Server™, Oracle™, or other sources of data including cloud-based file systems such as Amazon™ Simple Storage Service (Amazon™ S3). Other extracting mechanisms and/or data sources are contemplated.
  • process 1000 may include generating a representative data profile based on the extracted data elements.
  • the generating of profile data 252 may be based on historical data and metadata, and according to an algorithm or server programs 244 .
  • the generating may be executed automatically according to network server 160 or processor 220 interaction with cloud platform 190 .
  • a preliminary analysis may also be conducted by a monitoring user according to descriptive statistics or displayed data in order to provide user adjustable input and/or features to provide additional instructions to establish a representative data profile.
  • process 1000 may automatically create a first set of applicable rules based on the extracted data elements and the representative data profile, wherein the first set of applicable rules assesses data quality according to a determined threshold.
  • the threshold may be determined by a monitoring user as illustrated in FIG. 5 .
  • a threshold may be determined automatically based on previously extracted data elements and representative data profile generated based on the previously extracted data elements.
  • the threshold may define criteria for acceptable or unacceptable data quality.
  • a machine learning algorithm may be implemented in order to suggest a set of rules using parts of data element names as input.
  • the algorithm may learn from data (and previously defined rules) describing the relationship between element names and existing rules to predict rules for new data elements. Rules may also be mapped where rules exist together.
  • a representative data profile of an element may be used to refine a rule list. The representative data profile may check applicability of rule and may suggest more rules. For example, the data profile may conduct a format check to determine whether a ZIP code adheres to a custom format of a 5-, 9-, or 11-digit integer.
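  • A hedged illustration of both ideas follows: a toy text classifier predicts a rule type from tokens in an element's name, and a profile-driven format check validates the ZIP example. All element names, training labels, and the model choice below are assumptions, not the patent's method:

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Element names that already carry rules serve as training data (toy examples).
names = ["acct_apr", "promo_apr", "zip_cd", "mail_zip_cd", "pmt_amt"]
rules = ["range",    "range",     "format", "format",      "range"]

model = make_pipeline(CountVectorizer(token_pattern=r"[a-z]+"),  # name parts as tokens
                      MultinomialNB())
model.fit(names, rules)
print(model.predict(["cust_zip_cd", "intro_apr"]))  # likely ['format', 'range'] here

# Profile-driven refinement: ZIP code as a 5-, 9-, or 11-digit integer.
zip_rule = re.compile(r"\d{5}(\d{4}(\d{2})?)?")
print(bool(zip_rule.fullmatch("123456789")))  # True (9 digits)
```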
  • Processor 220 may be configured to automatically create additional rules for additional extracted data elements.
  • Processor 220 may also be further configured to define a conditional rule by identifying subpopulations, historical data, and contingent relationships between the extracted data elements. Codes may be automatically generated for rules. Additionally, as users accept, reject, or adjust a rule as mentioned, that decision may be considered as an input and utilized in further refining the rule suggestions or second set of suggested rules.
  • a rule may comprise at least one of a support value, a confidence value, or a lift ratio.
  • the support value of a rule may be defined as the proportion of transactions in the database which contain the item-set.
  • the confidence value of a rule X→Y may be the proportion of transactions that contain X which also contain Y.
  • the lift ratio may be the ratio of the observed support to the support expected if X and Y were independent.
  • Minimum support, confidence, and lift values may be predetermined.
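  • The following toy computation illustrates the three metrics under their standard association-rule definitions; the transaction list is hypothetical:

```python
transactions = [
    {"finalized", "apr_present"},
    {"finalized", "apr_present"},
    {"finalized"},
    {"apr_present"},
]
X, Y = {"finalized"}, {"apr_present"}
n = len(transactions)

support_xy = sum((X | Y) <= t for t in transactions) / n  # P(X and Y) = 0.5
support_x = sum(X <= t for t in transactions) / n         # P(X) = 0.75
support_y = sum(Y <= t for t in transactions) / n         # P(Y) = 0.75

confidence = support_xy / support_x            # P(Y | X) ~= 0.667
lift = support_xy / (support_x * support_y)    # observed vs. independence ~= 0.889
print(support_xy, confidence, lift)
```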
  • a rule may also comprise at least one of a null rule, a range rule, a uniqueness rule, a valid value rule, a format rule, a conditional rule, or a consistency rule.
  • a null rule may be contemplated for all variables, and a null rule check may be suggested when the upper limit (UL) for the percentage of missing data, as calculated by the data quality management system/tool, is lower than 1%.
  • a range rule may apply to continuous variables, and a range may be defined from a lower limit at the 0.5th percentile (LL(P0.5)) to an upper limit at the 99.5th percentile (UL(P99.5)).
  • a uniqueness rule may apply to all variables except time-stamp, decimal, and descriptive variables, and a uniqueness rule may be proposed if 98% of the values in the variable are unique.
  • a valid value rule may apply to categorical variables, and allowed valid values may be determined.
  • a format rule may apply to all variables, and may assess the quality of the data format. A sketch applying these heuristics appears below.
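  • In this sketch of profile-driven rule suggestion for a pandas column, the numeric limits (1% missing, P0.5/P99.5 bounds, 98% uniqueness) come from the passage above, while everything else is an assumption:

```python
import pandas as pd

def suggest_rules(s: pd.Series) -> list:
    rules = []
    if s.isna().mean() < 0.01:            # % missing below the 1% upper limit
        rules.append("null rule: value must not be missing")
    if pd.api.types.is_numeric_dtype(s):  # continuous variable
        lo, hi = s.quantile(0.005), s.quantile(0.995)
        rules.append(f"range rule: {lo} <= value <= {hi}")  # LL(P0.5)..UL(P99.5)
    if s.nunique() / len(s) >= 0.98:      # at least 98% of values unique
        rules.append("uniqueness rule: values must be unique")
    return rules

print(suggest_rules(pd.Series(range(1000))))
```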
  • process 1000 may include assessing newly extracted data elements based on a comparison of the newly extracted data elements to the second set of suggested rules.
  • Rules may be generated automatically and also may be monitored by users prior to their application to additional data sets. Once approved by a user, new rules may be automatically applied to extracted data on a periodic basis and may indicate, based on a determined threshold, whether a data element is of acceptable quality for use by an organization and whether it passes or fails.
  • the comparison may include an identification of data defects determined by processor 220 or network server 160 . Further, user actions may also operate as an input to the rule suggestion framework and may further refine rule suggestions.
  • the process may further include clustering the assessed data elements into multiple segments according to defects detected in the assessed data elements. Based on the rules application, defects may be triggered.
  • Identification of data defects may be collated, and a data quality score may be assigned to an element, whether or not the data possesses any defects (as further referenced in FIGS. 5-9 ). Based on an assigned score or ranking, a data element may be classified or categorized as possessing either failing or passing data quality. Based on this classification, data elements may be clustered or segmented. Clustering of data may also include more than two classifications.
  • process 1000 may analyze data quality corresponding to defects detected in the assessed data elements.
  • a decision tree algorithm from a comprehensive platform or from a third-party tool may be used to find pockets of defect concentration (as illustrated in FIG. 9 ).
  • a defect may be defined as an event, and a population of data may be divided into multivariate subpopulations based on an event rate or concentration of defects.
  • the data may be isolated and analyzed according to detected defects. Curated settings adjusted by a user may be used to identify nodes with a large concentration of defects (as illustrated in FIG. 9 ).
  • a user can select the type of defect on which he or she wants to perform further analysis, and can generate the list of defects at the click of a button. Based on identification and analysis of data, data can be further clustered, segmented, and used to provide an overall assessment of data health. Additionally, data with defects can be disregarded.
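  • As a hedged sketch of this decision-tree analysis (the feature names and data are synthetic, and the patent does not mandate a specific library), defects are treated as the event and leaves with high event rates identify the multivariate subpopulations of interest:

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({"EXTNSN_CNT": rng.integers(0, 10, 1000),
                  "PMT_PDUE_CT": rng.integers(0, 5, 1000)})
# Synthetic "defect" event, concentrated where EXTNSN_CNT is large.
defect = ((X["EXTNSN_CNT"] > 7) & (rng.random(1000) < 0.8)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, defect)
leaf = tree.apply(X)  # leaf node id for each observation
rates = pd.Series(defect.to_numpy(), index=leaf).groupby(level=0).mean()
print(rates.sort_values(ascending=False).head())        # pockets of defect concentration
print(dict(zip(X.columns, tree.feature_importances_)))  # cf. the "Variable Importance" panel
```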
  • process 1000 may deliver an output of the data quality to an associated personal device for display and data monitoring by a user.
  • a summary dashboard may be generated (as referenced in FIGS. 4-9 ) to display data quality results on an ongoing basis.
  • the dashboard may be displayed as part of a web browser, and may be displayed as part of an application executed on a personal device 120 .
  • Results, details, and trends may be displayed in a graphical user interface. Additionally, historical results may be displayed. Monitoring of existing data may be compared with historical runs. An element's summary statistics may also be displayed to help in monitoring any sudden change in the data quality metrics of one or more data elements.
  • the dashboard may integrate data quality results of multiple data tables and datasets into one singular view and summarize “Data Health” for a group of datasets at an LOB (line of business) level or at a Database level.
  • Data Health may be defined as the percentage of elements passing all tests, across all variables, at a particular grouping level.
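  • Read literally, that definition reduces to a small aggregation; a sketch over a hypothetical per-test results table:

```python
import pandas as pd

results = pd.DataFrame({
    "database": ["DB1", "DB1", "DB1", "DB2"],
    "variable": ["PMT_PDUE_CT", "PMT_PDUE_CT", "EXTNSN_CNT", "TOT_PROM_CNT"],
    "passed":   [True, True, False, True],
})

# An element is healthy only if it passes every test applied to it.
healthy = results.groupby(["database", "variable"])["passed"].all()
data_health = healthy.groupby(level="database").mean() * 100  # percent per grouping level
print(data_health)  # DB1: 50.0, DB2: 100.0
```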
  • FIG. 11 is a flow chart illustrating another exemplary process 1100 that one or more processors may perform in accordance with the disclosed embodiments. While the exemplary process 1100 is described herein as a series of steps, it is to be understood that the order of the steps may vary in other implementations. In particular, steps may be performed in any order, or in parallel.
  • process 1100 may include providing a plurality of user-adjustable data feature settings to a user.
  • the user-adjustable data feature settings may be provided as a tab, a slider, a drop-down menu, or a toggle.
  • the user-adjustable data feature settings may include at least one of a variable, a threshold, a rule, a score, or a test.
  • a graphical user interface (GUI) as described may include a plurality of user-adjustable data feature settings.
  • Other user-adjustable data feature settings may be contemplated.
  • process 1100 may include detecting a user interaction modifying the user-adjustable data feature settings.
  • a detection may include one or more pressure sensors detecting a user's finger pressing on a user interface panel of a personal device 120 .
  • a detection may also include processor 220 detecting a change in a selection of the user-adjustable data feature settings. For example, a change in selection of a drop-down menu or slider may indicate that user-adjustable features have been modified according to a variable and/or rule.
  • Other detection mechanisms may be contemplated.
  • process 1100 may include determining customized data feature settings based on historical user interaction.
  • GUI data 256 may be stored as representative of customized user preferences for monitoring users.
  • GUI data 256 may be associated with profile data 252 and rule data 254 .
  • GUI data 256 may be customized according to user, profile, rule, variable, cluster defect segmentation, and/or additional categories. Capturing user selection/customization of rules over time may also allow for better organizational recommendations. Other customized data feature settings may be contemplated.
  • process 1100 may include automatically creating a first set of applicable rules based on the customized data feature settings. Codes may be automatically generated for rules. Automatic creation of the first set of applicable rules may be associated with a representative data profile, historical data, and metadata, and processors 220 may be configured to store the representative data profile, historical data, and metadata in a cloud network. An automatically created first set of applicable rules may assess data quality according to a determined threshold. The threshold may be determined by a monitoring user as illustrated in FIG. 5 . Alternatively, a threshold may be determined automatically based on previously extracted data elements and the representative data profile. The threshold may define criteria for acceptable or unacceptable data quality.
  • Processor 220 may be configured to define a conditional rule by identifying subpopulations, historical data, and contingent relationships between the extracted data elements.
  • a rule may also comprise at least one of a null rule, a range rule, a uniqueness rule, a valid value rule, a conditional rule, or a consistency rule. Other rule types and/or formats may be contemplated.
  • Programs based on the written description and disclosed methods are within the skill of an experienced developer.
  • Various programs or program modules can be created using any of the techniques known to one skilled in the art or can be designed in connection with existing software.
  • program sections or program modules can be designed in or by means of .Net Framework, .Net Compact Framework (and related languages, such as Visual Basic, C, etc.), Java, C++, Objective-C, HTML, HTML/AJAX combinations, XML, or HTML with included Java applets.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A system for providing data quality management may include a processor configured to execute instructions to: extract a plurality of first data elements from a data source; generate a data profile based on the first data elements; automatically create a first set of rules based on the first data elements and the data profile, the first set of rules assessing data quality according to a threshold; generate a second set of rules based on the first data elements and the first set of rules; extract a plurality of second data elements; assess the second data elements based on a comparison of the second data elements to the second set of rules; detect defects based on the comparison; analyze data quality according to the detected defects; and transmit signals representing the data quality analysis to a client device for display to a user.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 16/252,024, filed on Jan. 18, 2019, which is a continuation of U.S. patent application Ser. No. 15/847,674, filed on Dec. 19, 2017, now U.S. Pat. No. 10,185,728, issued on Jan. 22, 2019, which claims benefit of U.S. Provisional Patent Application No. 62/436,258, filed on Dec. 19, 2016, the disclosures of which are hereby incorporated by reference in their entirety.
  • TECHNICAL FIELD
  • The present disclosure generally relates to providing improved data quality management solutions for organizations, and more specifically, to developing automated and contingent rule-making processes to assess data quality for use in organizational decision making.
  • BACKGROUND
  • An organization may wish to collect, store, monitor, and analyze data. As the amount of data grows, data amalgamation mechanisms are increasingly relied upon in order to form organizational and corporate strategic decisions. While current data assembly mechanisms may allow for collection of raw data, there exist some shortcomings. For example, collected raw data may not be readily usable, and may need to be modified or summarized prior to analysis or synthesis. Moreover, collected data may constitute low quality data, and errors may occur during collection and/or transformation of data. For example, errors related to data quality may include incomplete data (where data has not been pulled from one source), incorrect transformations to a data element, wrong manual entry of the data, and errors in calculations. Further, at present it is difficult to assess data quality for a large dataset and focus is usually on individual tables of data. With such errors, difficulties in assessment, and without such transformation, premature use of data for analysis may cause poor organizational decision-making resulting in significant monetary costs and reputational damage to a company.
  • Furthermore, while current data quality tools allow for processing of significant amounts of data, existing tools may not provide comprehensive means for data exploration and monitoring. For example, some data platforms may be used solely to execute processes related to collection at the individual dataset level, while other tools may be used solely to execute processes related to data tracking. Additionally, many data quality tools require manual implementation, which may be tiresome and burdensome to operate since there exists no standardized procedure. Use of disaggregated and manual data collection mechanisms across multiple platforms may also result in tedious or erroneous data analysis. Furthermore, where data monitoring for a large number of variables is required, use of existing data quality tools may require significant human capital over a long period of time and at significant cost to an organization.
  • Accordingly, it may be desirable to provide a standard data quality process or rule-based workflow implementable within a singular platform. This process may significantly distinguish from or improve over a manual process. For example, where an example variable is an annual percentage rate (APR), manual rules may check whether the contract APR is a number and whether it is greater than zero. However, the enhanced process described herein may include the creation of suggested rules, including not only a rule to check whether contract APR is a number greater than zero, but also including a rule indicating that the contract APR should not be missing if the contract is finalized. This rule provides a further check, thus improving data quality over that of manual rule implementation. This provides an improvement in data quality and an efficiency gain.
  • Additionally, an automated end-to-end data application may be preferable in order to allow for streamlined data procurement, analysis using consistent metrics, and monitoring. Moreover, there exists a need for a user-intuitive point-and-click interface allowing for rapid and efficient monitoring and exploration of significant quantities of data sets and elements. Further, there exists a need for a comprehensive data tool which allows a user to perform diagnosis of data quality directly within the data tool. Performing such diagnosis on current platforms is inefficient, difficult, or even impossible, requiring excess operator time and processing resources.
  • Further, typical processes for managing data quality are subjective and not automated. Such processes are time- and resource-consuming. Therefore, it is desirable to implement a distinctly computer-implemented and enhanced automated process which improves the management of data quality.
  • The present disclosure is directed at addressing one or more of the shortcomings set forth above and/or other problems of existing hardware systems.
  • SUMMARY
  • One aspect of the present disclosure is directed to a system for providing data quality management. The system may include a memory storing instructions, and a processor connected to a network. The processor is configured to execute the instructions to: extract a plurality of first data elements from a data source; generate a data profile based on the first data elements; automatically create a first set of rules based on the first data elements and the data profile, the first set of rules assessing data quality according to a threshold; generate a second set of rules based on the first data elements and the first set of rules; extract a plurality of second data elements; assess the second data elements based on a comparison of the second data elements to the second set of rules; detect defects based on the comparison; analyze data quality according to the detected defects; and transmit signals representing the data quality analysis to a client device for display to a user.
  • Another aspect of the present disclosure is directed to a method for providing data quality management. The method may be performed by a processor, and may include: extracting a plurality of first data elements from a data source; generating a data profile based on the first data elements; automatically creating a first set of rules based on the first data elements and the data profile, the first set of rules assessing data quality according to a threshold; generating a second set of rules based on the first data elements and the first set of rules; extracting a plurality of second data elements; assessing the second data elements based on a comparison of the second data elements to the second set of rules; detecting defects based on the comparison; analyzing data quality according to the detected defects;
  • and transmitting signals representing the data quality analysis to a client device for display to a user.
  • Another aspect of the present disclosure is directed to a non-transitory computer-readable medium for providing data quality management. The non-transitory computer-readable medium may store instructions executable by one or more processors to perform a method. The method may include: extracting a plurality of first data elements from a data source; generating a data profile based on the first data elements; automatically creating a first set of rules based on the first data elements and the data profile, the first set of rules assessing data quality according to a threshold; generating a second set of rules based on the first data elements and the first set of rules; extracting a plurality of second data elements; assessing the second data elements based on a comparison of the second data elements to the second set of rules; detecting defects based on the comparison; analyzing data quality according to the detected defects; and transmitting signals representing the data quality analysis to a client device for display to a user.
  • Another aspect of the present disclosure is directed to a system for providing data quality management. The system includes a graphic user interface (GUI) configured to: display a plurality of user-adjustable data feature settings to a user; and detect a first user interaction modifying the data feature settings. The system also includes a processor coupled to the GUI and configured to: extract a plurality of first data elements from a data source; generate a data profile based on the first data elements; determine first data feature settings based on the first user interaction; and create a first set of rules based on the first data feature settings, the first set of rules assessing quality of the first data elements according to a threshold.
  • Another aspect of the present disclosure is directed to a method for providing data quality management. The method includes: displaying, on a graphic user interface (GUI), a plurality of user-adjustable data feature settings to a user; detecting, from the GUI, a first user interaction modifying the data feature settings; extracting, by a processor coupled to the GUI, a plurality of first data elements from a data source; generating, by the processor, a data profile based on the first data elements; determining, by the processor, first data feature settings based on the first user interaction; and creating, by the processor, a first set of rules based on the first data feature settings, the first set of rules assessing quality of the first data elements according to a threshold.
  • Yet another aspect of the present disclosure is directed to a non-transitory computer-readable medium storing instructions executable by a processor to provide a graphic user interface (GUI) and perform a method for providing data quality management. The method comprising: displaying, on the GUI, a plurality of user-adjustable data feature settings to a user; detecting, from the GUI, a first user interaction modifying the data feature settings; extracting, by a processor coupled to the GUI, a plurality of first data elements from a data source; generating, by the processor, a data profile based on the first data elements; determining, by the processor, first data feature settings based on the first user interaction; and creating, by the processor, a first set of rules based on the first data feature settings, the first set of rules assessing quality of the first data elements according to a threshold.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic block diagram illustrating an exemplary system for providing data quality management, consistent with the disclosed embodiments;
  • FIG. 2 is a schematic block diagram illustrating an exemplary network server, used in the system of FIG. 1;
  • FIG. 3 is a schematic block diagram illustrating an exemplary controller, used in the system of FIG. 1;
  • FIG. 4 is a diagrammatic illustration of an exemplary graphical user interface used for monitoring and configuring variables related to data quality management;
  • FIG. 5 is a diagrammatic illustration of an exemplary graphical user interface used for tracking results related to variable data quality;
  • FIG. 6 is a diagrammatic illustration of an exemplary graphical user interface used for tracking details related to variable data quality;
  • FIG. 7 is a diagrammatic illustration of an exemplary graphical user interface used for tracking trends related to variable data quality;
  • FIG. 8 is a diagrammatic illustration of an exemplary graphical user interface used for tracking decision analysis related to variable data quality;
  • FIG. 9 is a diagrammatic illustration of an exemplary graphical user interface used for tracking a decision tree related to variable data quality;
  • FIG. 10 is a flow chart illustrating an exemplary process performed by the system in FIG. 1, consistent with the disclosed embodiments; and
  • FIG. 11 is a flow chart illustrating another exemplary process performed by the system in FIG. 1, in accordance with the disclosed embodiments.
  • DETAILED DESCRIPTION
  • The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and in the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions, or modifications may be made to the components and steps illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope of the invention is defined by the appended claims.
  • FIG. 1 is a schematic block diagram illustrating an exemplary system for providing data quality management, consistent with the disclosed embodiments. As illustrated in FIG. 1, system 100 may include one or more personal devices and/or user equipment 120, a controller 130, and a network 150.
  • Personal devices 120 may include personal computing devices such as, for example, desktop computers, notebook computers, mobile devices, tablets, smartphones, wearable devices such as smart watches, smart bracelets, and Google Glass™, and any other personal devices. Personal devices 120 may communicate with other parts of system 100 through network 150. Personal devices 120 may also include software and executable programs configured to communicate with network 150 and customize data quality management for one or more users monitoring and configuring data quality. Other software and executable programs are contemplated.
  • System 100 may allow for one or more personal devices 120 and/or controllers 130 to transfer representative data profiles, historical data, metadata, and customized user-adjustable data quality management features associated with a data quality application (e.g., illustrated in FIGS. 4-9) over network 150 to a cloud platform 190 and/or controller 130.
  • System 100 may include mobile or stationary (not shown) personal devices 120 located in residential and non-residential premises configured to communicate with network 150. Personal devices 120 and/or controller 130 may connect to network 150 by Wi-Fi or wireless access points (WAP). Bluetooth® or similar wireless technology may also be contemplated. Network 150 may include a wireless network, such as a cellular network, a satellite network, the Internet, or a combination of these (or other) networks that are used to transport data. Furthermore, network 150 may be a wired network, such as an Ethernet network. Network 150 may transmit, for example, authentication services that enable personal devices 120 and/or controller 130 to access information, and may transmit data quality management instructions according to representative profile data, created rule data, segment or cluster data, and associated metadata.
  • In exemplary system 100, personal devices 120 and controller 130 may communicate with one or more servers in cloud platform 190 through network 150. Cloud platform 190 may comprise one or more network servers 160, third party servers 170, and/or databases 180. Servers 160 and 170 may provide cloud services for users and their personal devices 120 and/or controller 130. For example, a cloud-based architecture may be implemented comprising a distributed portion that executes at another location in network 150 and a corresponding cloud portion that executes on a network server 160 in cloud platform 190. Servers in cloud platform 190 may also communicate with a transceiver of controller 130 over network 150 using appropriate cloud-based communication protocols, such as Simple Object Access Protocol (SOAP) or Representational State Transfer (REST) and/or other protocols that would be known to those skilled in the art. Such communication may allow for remote control of data quality management operations of controller 130 by, for example, identifying representative data profiles and data quality preferences associated with the identified data profiles. Such communication may also allow for remote control of data quality management monitoring operations by, for example, a user operating a GUI on a data quality management application executed on a personal device 120 and/or on controller 130 to configure user-adjustable data feature settings or monitor related variables.
  • As shown in FIG. 1, network 150 may be accessible to network servers 160, third party servers 170, and databases 180 in cloud platform 190, for sending and receiving information, such as profile data, rule data, and segment data, within system 100. Network server 160, third party server 170, and database 180 may include network, cloud, and/or backup services. For example, in some embodiments, network server 160 may include a cloud computing service such as Microsoft Azure™ or Amazon Web Services™. Additional cloud-based wireless access solutions compatible with LTE (e.g., using the 3.5 GHz spectrum in the US) are contemplated. In some embodiments, third party server 170 may include a messaging or notification service, for example, that may notify or alert a monitoring user of at least one rule update through the cloud network. A selected rule from a set of applicable rules may be updated and may include at least one of a null rule, a range rule, a uniqueness rule, a valid value rule, a format rule, a conditional rule, or a consistency rule, but other rule types are contemplated. A conditional rule (if A, then B) may flag accounts as defective if a variable is missing, as the variable cannot be missing if a certain condition is fulfilled. For example, where a variable is “Contract APR” and where a fulfilled condition is exemplified by an “offer type code” equaling a “finalized contract,” the variable of “Contract APR” cannot be designated as missing upon the “offer type code” equaling a “finalized contract.” A consistency rule may include a mathematical or correlational check (e.g., A=B+C where A, B, and C are variables, or when A increases, B also increases). A more complete description of “rules” will be set forth below.
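  • As a minimal illustrative sketch (not taken from the disclosure; the pandas dependency and column names such as “offer_type_code” and “contract_apr” are assumptions chosen for this example), conditional and consistency checks of this kind might be expressed as:

```python
import pandas as pd

# Hypothetical account data; the column names are illustrative only.
accounts = pd.DataFrame({
    "offer_type_code": ["finalized_contract", "prospect", "finalized_contract"],
    "contract_apr":    [14.9, None, None],
    "balance":         [100.0, 50.0, 80.0],
    "principal":       [90.0, 45.0, 65.0],
    "interest":        [10.0, 5.0, 10.0],
})

# Conditional rule (if A, then B): "Contract APR" may not be missing
# when the "offer type code" equals a finalized contract.
conditional_defects = accounts[
    (accounts["offer_type_code"] == "finalized_contract")
    & accounts["contract_apr"].isna()
]

# Consistency rule (A = B + C): balance should equal principal plus interest.
consistency_defects = accounts[
    accounts["balance"] != accounts["principal"] + accounts["interest"]
]

print(len(conditional_defects), "conditional-rule defect(s)")  # 1
print(len(consistency_defects), "consistency-rule defect(s)")  # 1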
  • FIG. 2 is a schematic block diagram illustrating an exemplary network server 160, used in the exemplary system 100 of FIG. 1. It is contemplated that one or more personal devices 120 may include similar structures described in connection with network server 160. As shown in FIG. 2, network server 160 may include, among other things, a processor 220, input/output (I/O) devices 230, a memory 240, and a database 260, each coupled to one or more interconnected internal buses (not shown). Memory 240 may store, among other things, server programs 244 and an operating system 246. Server programs 244 may be executed by cloud-based architecture or, alternatively, by a separate software program, such as a data quality management application (as further described with reference to FIGS. 4-9) for execution on personal device 120 and/or controller 130. Software program 244 may be located in personal devices 120, or in alternative embodiments, in a controller 130 (as described with reference to FIG. 3). Software program 244 may configure remote control and update of user-adjustable data feature settings according to existing profile data, rule data, and segment data.
  • Memory 240 and/or database 260 may store profile data 252 based on individual and/or aggregate data profile behavior. Profile data 252 may be input directly or manually by a user into a data quality management application that is executed on a personal device 120 and/or by a controller 130. Profile data 252 may also be automatically generated based on extracted data elements. Memory 240 may also store other data and programs. Profile data 252 may include representative data profiles related to organizational information including, for example, affiliated user login and/or other registration identification (ID) or user credentials, authentication timestamp information, network node or access point location(s) and/or preferences, and other metadata generated by algorithms in server programs 244.
  • Other data critical to an organization's mission is also contemplated as profile data 252. Memory 240 and/or database 260 may also store rule data 254 and segment data 256. Rule data 254 and segment data 256 may be directly and manually input to a data quality management application that is executed on a personal device 120 and/or controller 130.
  • Alternatively, rule data 254 and segment data 256 may be automatically generated based on extracted data elements and profile data 252, historical data, or other metadata. Rule data 254 may include assessment of data quality according to a determined threshold, and may include data related to at least one of a null rule, range rule, uniqueness rule, valid value rule, format rule, conditional rule, or consistency rule. Rule data 254 may further include data related to a support value, a confidence value, and a lift ratio. Segment data 256 may include data defining generated clusters based on detection of defects in extracted data elements.
  • Data may be transformed using a general-purpose cluster computing system such as Spark™, and data may be extracted using various utilities, including third-party platforms.
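  • A minimal sketch of such a transformation step, assuming PySpark and hypothetical paths and column names, might look like:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch: normalize raw extracted data with Spark before profiling.
spark = SparkSession.builder.appName("dq-transform").getOrCreate()

# Hypothetical source location and columns.
raw = spark.read.csv("s3://example-bucket/extracts/accounts.csv",
                     header=True, inferSchema=True)

clean = (
    raw.withColumn("acct_id", F.trim(F.col("acct_id")))              # strip whitespace
       .withColumn("pmt_pdue_ct", F.col("pmt_pdue_ct").cast("int"))  # enforce type
)
clean.write.mode("overwrite").parquet("s3://example-bucket/curated/accounts/")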
  • Database 260 may include Microsoft SQL™ databases, SharePoint™ databases, Oracle™ databases, Sybase™ databases, or other relational databases. Memory 240 and database 260 may be implemented using any volatile or non-volatile memory including, for example, magnetic, semiconductor, tape, optical, removable, non-removable, or any other types of storage devices or computer-readable mediums.
  • I/O interfaces 230 may include not only network interface devices, but also user interface devices, such as one or more keyboards, mouse devices, and GUIs for interaction with personal devices 120 and/or by controller 130. For example, GUIs may include a touch screen where a monitoring user may use his or her fingers to provide input, or a screen that can detect the operation of a stylus. GUIs may also display a web browser for point-and-click input operation. Network server 160 may provide profile data 252, rule data 254, and segment data 256 for use in a data quality management application (as further described with reference to FIGS. 4-9) that is displayed and executed on personal device 120 and/or controller 130. Based on user input or user interaction with the GUI, personal device 120 and/or controller 130 may transmit profile data 252, rule data 254, and segment data 256 to network server 160, from network 150 through I/O device 230, and may analyze such data to control and/or restrict variable and/or rule features and settings by modifying and configuring data quality management. Network server 160 may store a copy of profile data 252, rule data 254, and segment data 256, for example, in memory 240, database 260, or in any other database accessible to server 160.
  • FIG. 3 is a schematic block diagram illustrating an exemplary controller 130, used in the exemplary system of FIG. 1.
  • As illustrated in FIG. 3, controller 360 may be capable of communicating with a transceiver 310, network 150, cloud platform 190, and personal devices 120. Transceiver 310 may be capable of receiving one or more data quality management instructions (further described with reference to FIGS. 4-9) from one or more personal devices 120 and/or cloud platform 190 over network 150. Transceiver 310 may be capable of transmitting profile data 352 from controller 360 to one or more personal devices 120 and/or cloud platform 190 over network 150. Controller 360 may transmit profile data 352, rule data 354, and segment data 356. This information may be stored in memory 340 and/or database 362. Controller 360 may include one or more processors 320, input/output 330, controller programs 344 and operating system 346. Controller 360 may function in a manner similar to network server 160 and may operate independently or cooperatively with network server 160.
  • Controller 360 may be configured to receive data quality management instructions to control, send, and/or edit data quality management features and/or settings. The user-adjustable data feature settings may include at least one of a threshold, a rule, a score, or a test feature. Other data features and/or settings are contemplated. A user may monitor variable and data quality according to requests from one or more registered tracking users executing a data quality management application on personal device 120 and/or through a controller 130.
  • Controller 360 may function to include input according to rule data 354 based on one or more rules. A first set of applicable rules may automatically be generated based on extracted data elements and/or based on a plurality of data feature settings. Rule data 354 may include a null rule 370, range rule 372, uniqueness rule 374, valid value rule 376, or format rule 378. Consistency and conditional rules (not shown) may also be contemplated. Other data quality management features may be utilized based on the particular data being analyzed, as well as for a specific purpose for which the data is being analyzed.
  • FIG. 4 is a diagrammatic illustration of an exemplary graphical user interface used for monitoring and configuring variables related to data quality management. GUI 400 may be displayed as part of a screen executed by an application on a personal device 120.
  • Exemplary GUI 400 may include a “User Defined Rules Summary” or a “Statistical Rules Summary” to assess data quality, including metrics and percentage assessments such as, for example, “Overall Pass,” further divided into the rules categories of “Accuracy,” “Completeness,” and “Consistency.” A “Table” may extract data from a particular “Database,” and a list of “Variables” may be displayed for further exploration and monitoring by a tracking user according to a selected “Table.” For example, in FIG. 4 the “Table” according to “PL_LSMTGN_ACCT_DLY” is selected, and variables such as “PMT_PDUE_CT” and “TOT_PROM_CNT” are displayed for analysis. The “Variables” may be organized and categorized according to “Variable Name,” “Type,” “#Tests,” “Score” (for all tests, and a “Score” for each category of rules), “AC (Accuracy),” “CP (Completeness),” and/or “CO (Consistency).”
  • The “results” may also be organized and categorized according to “Statistical Rule” categories including, for example, “FA (Frequency Anomaly),” “GS (General Statistics),” “OD (Outlier Detection),” and “% MIS (Missing).” Each category may include a numerical value or ranking indicating an assessment of the variable. A score may be calculated to see how many tests a variable is failing, by what “severity” it is failing the tests, or a combination of these two, as well as the number of rows failing at least one test. Sorting and filtering features may also be incorporated to help the user prioritize elements to analyze, for example, elements with worse data quality first, based on a chosen metric in “Score.” A “Variable Distribution” may also be displayed. These data feature settings may be adjustable and customized according to the display preferences of a monitoring user. For example, a display scale can be changed between linear or logarithmic for “Variable Distribution.”
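  • Purely as an illustration (the weighting scheme and test-result structure below are assumptions for the example, not the specification of the disclosed scoring), a severity-weighted score of this kind could be computed as:

```python
# Illustrative test results for one variable: name -> (passed, severity weight).
test_results = {
    "null_check":   (True,  1.0),
    "range_check":  (False, 2.0),   # failed, moderate severity
    "format_check": (False, 0.5),   # failed, low severity
}

failed_severities = [sev for passed, sev in test_results.values() if not passed]

# Score starts at 100 and drops as severity-weighted failures accumulate.
score = max(0.0, 100.0 - 10.0 * sum(failed_severities))
print(f"tests failed: {len(failed_severities)}, score: {score}")  # 2 failed, 75.0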
  • User-adjustable data feature settings may be provided as a tab, a slider, a drop-down menu, or a toggle. Other user-adjustable data feature settings and variable displays may be contemplated.
  • FIG. 5 is a diagrammatic illustration of an exemplary graphical user interface 500 used for tracking results related to variable data quality. GUI 500 may be displayed as part of a screen executed by an application on a personal device 120. GUI 500 may include a “Statistical Rules Summary” including an assessment of “Frequency Anomaly,” “Outlier Detection,” “Missing,” and “General Statistics” related to variable “PMT_PDUE.” GUI 500 may include drop-down menus and other input fields (not shown) to analyze and/or summarize a selection of one or all rules related to this variable. For example, GUI 500 may allow a user to “Include” a summary of “All Rules” (or only user-defined or statistical rules) and may provide a “Score.” Tabs for “Results,” “Details,” and “Trends” can be selected by a monitoring user. In FIG. 5, “Results” are displayed indicating tracking results related to rule data quality. These results are displayed in line graph and tabular form. The line graph may be representative of a summary of the data quality of the selected element. The tabular form may be representative of a listing of all applicable rules for a selected element. User-defined rules as well as statistical rules may be displayed, and may illustrate rule performance in comparison with historical data. Other types of graphs and graphical forms of displaying data are contemplated. Also, “Accuracy,” “Completeness,” and “Consistency” can be assessed. A “Change Threshold” setting allows a monitoring user to change the threshold visible on the dashboard. (Further, a threshold suggestion based on a historical data profile may be pre-populated.) Historical data for past cycles may also be displayed as relating to a selected one or more rules that may be automatically generated. Additionally, “Decision Tree Analysis” and “Defects” buttons may be included in the bottom right of the screen to open a separate GUI for display (as further described with reference to FIG. 9).
  • FIG. 6 is a diagrammatic illustration of an exemplary graphical user interface 600 used for tracking details related to variable data quality. GUI 600 may be displayed as part of a screen executed by an application on a personal device 120. GUI 600 may include a “Statistical Rules Summary” including an assessment of “Frequency Anomaly,” “Outlier Detection,” “Missing,” and “General Statistics” related to a different variable, “EXTNSN_CNT.” GUI 600 may include drop-down menus and other input fields (not shown) to analyze and/or summarize a selection of one or all rules related to this variable. For example, GUI 600 may allow a user to “Include” a summary of “All Rules” and may provide a “Score.” Tabs for “Results,” “Details,” and “Trends” can be selected by a monitoring user. In FIG. 6, “Details” are displayed indicating tracking details related to an element data profile. These results are displayed in a vertical bar graph and tabular form. Other types of graphs and graphical forms of displaying data are contemplated. For example, a “Box Plot” may be shown to display the data distribution over time.
  • FIG. 7 is a diagrammatic illustration of an exemplary graphical user interface 700 used for tracking trends related to variable data quality. GUI 700 may be displayed as part of a screen executed by an application on a personal device 120. GUI 700 may include a “Statistical Rules Summary” including an assessment of “Frequency Anomaly,” “Outlier Detection,” “Missing,” and “General Statistics” related to variable “EXTNSN_CNT.” GUI 700 may include drop-down menus and other input fields (not shown) to analyze and/or summarize a selection of one or all rules related to this variable. For example, GUI 700 may allow a user to “Include” a summary of “All Rules” and may provide a “Score.” Tabs for “Results,” “Details,” and “Trends” can be selected by a monitoring user. In FIG. 7, “Trends” are displayed indicating tracking trends related to rule data quality. These results are displayed in a line graph form. Other types of graphs and graphical forms of displaying data are contemplated.
  • FIG. 8 is a diagrammatic illustration of an exemplary graphical user interface 800 used for tracking decision tree analysis related to variable data quality. GUI 800 may be displayed as part of a screen executed by an application on a personal device 120. GUI 800 may include a “Statistical Rules Summary” including an assessment of “Frequency Anomaly,” “Outlier Detection,” “Missing,” and “General Statistics” related to variable “EXTNSN_CNT.” GUI 800 may include drop-down menus and other input fields (not shown) to analyze and/or summarize a selection of one or all rules related to this variable. For example, GUI 800 may allow a user to “Include” a summary of “All Rules” and may provide a “Score.” Tabs for “Analysis” and “Tree” can be selected by a monitoring user. In FIG. 8, the tab for “Analysis” is displayed indicating analysis related to rule data quality. This “Analysis” is displayed in a horizontal bar graph and corresponds to a “Positive Outlier” test of multiple data nodes. All clusters (segments) derived from the decision tree analysis may be displayed. A bar graph may be sorted according to the cluster having the most “defects” or the most observations. A “Node Definition” may be the definition of the segment. Further, a “Node Definition” may be displayed for a monitoring user. GUI 800 may also display a contiguous panel of “Variable Importance” assessing and displaying in percentage and graphical terms the “Top 5 Variables.” The “Top 5 Variables” are the variables identified as important in defining multivariate clusters. Display of this information may aid users in performing further analysis. Other types of graphs and graphical forms of displaying data are contemplated.
  • FIG. 9 is a diagrammatic illustration of an exemplary graphical user interface 900 used for tracking a decision tree related to variable data quality. GUI 900 may be displayed as part of a screen executed by an application on a personal device 120. GUI 900 may include a “Statistical Rules Summary” including an assessment of “Frequency Anomaly,” “Outlier Detection,” “Missing,” and “General Statistics,” related to variable “EXTNSN_CNT.” GUI 900 may include drop-down menus and other input fields (not shown) to analyze and/or summarize a selection of one or all rules related to this variable. For example, GUI 900 may allow a user to “Include” a summary of “All Rules” and may provide a “Score.” Tabs for “Analysis” and “Tree” can be selected by a monitoring user. In FIG. 9, the tab for “Tree” is displayed indicating a decision tree analysis related to rule data quality. This “Tree” is displayed as a segmentation tree and corresponds to a “Positive Outlier” test of multiple data nodes. A “Node Definition” may be displayed for understanding how the segment is defined. Other types of graphs and graphical forms of displaying data are contemplated.
  • FIG. 10 is a flow chart illustrating an exemplary process 1000 that one or more processors may perform in accordance with the disclosed embodiments. While process 1000 is described herein as a series of steps, it is to be understood that the order of the steps may vary in other implementations. In particular, steps may be performed in any order, or in parallel.
  • At step 1002, process 1000 may include extracting a plurality of data elements from a data source. Data elements may be pulled from a data source automatically or may be uploaded from an existing software program. Process 1000 may include parsing a plurality of data elements according to an alphanumeric identifier or a data list from the data source.
  • Extraction may occur at an individual data set level, and may also occur at broader levels, such as for line of business (LOB) data. Data may include historical data and/or metadata. Data sources may include Teradata™, SAS™, SQL Server™, Oracle™, or other sources of data including cloud-based file systems such as Amazon™ Simple Storage Service (Amazon™ S3). Other extracting mechanisms and/or data sources are contemplated.
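  • As a hedged illustration of parsing according to an alphanumeric identifier (the record layout and delimiter below are invented for the example, not taken from the disclosure):

```python
import re

# Hypothetical delimited extract; the record layout is assumed for illustration.
raw_lines = [
    "ACCT001|PMT_PDUE_CT|3",
    "ACCT002|PMT_PDUE_CT|1",
    "not-a-record",  # skipped: no alphanumeric identifier match
]

# Parse records keyed by an alphanumeric identifier.
pattern = re.compile(r"^(?P<id>[A-Z0-9]+)\|(?P<element>\w+)\|(?P<value>.+)$")
records = [m.groupdict() for line in raw_lines if (m := pattern.match(line))]
print(records)  # two parsed records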
  • At step 1004, process 1000 may include generating a representative data profile based on the extracted data elements. The generating of profile data 252 may be based on historical data and metadata, and according to an algorithm or server programs 244. The generating may be executed automatically according to network server 160 or processor 220 interaction with cloud platform 190. A preliminary analysis may also be conducted by a monitoring user according to descriptive statistics or displayed data in order to provide user-adjustable input and/or features to provide additional instructions to establish a representative data profile.
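  • In simplified form, a representative data profile of one element might collect descriptive statistics such as those below (a sketch assuming pandas; the chosen statistics and column name are illustrative):

```python
import pandas as pd

def profile_element(series: pd.Series) -> dict:
    """Simplified representative data profile for a single data element."""
    profile = {
        "dtype": str(series.dtype),
        "pct_missing": float(series.isna().mean() * 100),
        "pct_unique": float(series.nunique() / max(len(series), 1) * 100),
    }
    if pd.api.types.is_numeric_dtype(series):
        # Tail percentiles; these can later seed a range-rule suggestion.
        profile["p0_5"] = float(series.quantile(0.005))
        profile["p99_5"] = float(series.quantile(0.995))
    return profile

data = pd.DataFrame({"pmt_pdue_ct": [0, 1, 2, None, 1]})
print(profile_element(data["pmt_pdue_ct"]))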
  • At step 1006, process 1000 may automatically create a first set of applicable rules based on the extracted data elements and the representative data profile, wherein the first set of applicable rules assesses data quality according to a determined threshold. The threshold may be determined by a monitoring user as illustrated in FIG. 5. Alternatively, a threshold may be determined automatically based on previously extracted data elements and representative data profile generated based on the previously extracted data elements. The threshold may define criteria for acceptable or unacceptable data quality.
  • A machine learning algorithm may be implemented in order to suggest a set of rules using parts of data element names as input. The algorithm may learn from data (and previously defined rules) describing the relationship between element names and existing rules to predict rules for new data elements. Rules may also be mapped where rules exist together. A representative data profile of an element may be used to refine a rule list. The representative data profile may check the applicability of a rule and may suggest more rules. For example, the data profile may conduct a format check to determine whether a ZIP code adheres to a custom format of a 5-, 9-, or 11-digit integer. Processor 220 may be configured to automatically create additional rules for additional extracted data elements. Processor 220 may also be further configured to define a conditional rule by identifying subpopulations, historical data, and contingent relationships between the extracted data elements. Codes may be automatically generated for rules. Additionally, as users accept, reject, or adjust a rule as mentioned, that decision may be considered as an input and utilized in further refining the rule suggestions or the second set of suggested rules.
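  • As a simplified stand-in for the learned name-to-rule mapping described above (the token-counting heuristic, element names, and rule labels are assumptions for illustration, not the disclosed algorithm):

```python
from collections import Counter, defaultdict

# Elements that already carry rules; names and rule labels are invented.
known = {
    "PMT_PDUE_CT":  ["null_rule", "range_rule"],
    "TOT_PROM_CNT": ["null_rule", "range_rule"],
    "OFFR_TYPE_CD": ["valid_value_rule"],
}

# Count which rules co-occur with each token of an element name.
token_rules = defaultdict(Counter)
for name, rules in known.items():
    for token in name.split("_"):
        token_rules[token].update(rules)

def suggest_rules(element_name: str, min_votes: int = 1) -> list:
    """Suggest rules for a new element from the tokens in its name."""
    votes = Counter()
    for token in element_name.split("_"):
        votes.update(token_rules.get(token, Counter()))
    return [rule for rule, n in votes.most_common() if n >= min_votes]

print(suggest_rules("EXTNSN_CNT"))  # "CNT" token suggests null and range rules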
  • A rule may comprise at least one of a support value, a confidence value, or a lift ratio. The support value of a rule may be defined as the proportion of transactions in the database which contain the item-set. The confidence value of a rule X→Y may be the proportion of transactions containing X which also contain Y. The lift ratio may be the ratio of the observed support to the support expected if X and Y were independent. Minimum support, confidence, and lift values may be predetermined. A rule may also comprise at least one of a null rule, a range rule, a uniqueness rule, a valid value rule, a format rule, a conditional rule, or a consistency rule. A null rule may be contemplated for all variables, and a null rule check may be suggested, when calculated by the data quality management system/tool, if the upper limit (UL) of the percentage of missing data is lower than 1%. A range rule may apply to continuous variables, and a range may be defined as a lower limit of P0.5 (LL(P0.5)) to an upper limit of P99.5 (UL(P99.5)). A uniqueness rule may apply to all variables except time-stamp, decimal, and descriptive variables, and a uniqueness rule may be proposed if 98% of the values in the variable are unique. A valid value rule may apply to categorical variables, and allowed valid values may be determined. This method may look at existing values in the categorical variable and the percentage of elements with each value to determine whether valid value rules should be suggested. A format rule may apply to all variables, and may assess the quality of the format of data. Other rule types and/or formats may be contemplated. For example, a conditional rule (if A, then B) may flag accounts as defective if a variable is missing, and a consistency rule may include a mathematical or correlational check (e.g., A=B+C where A, B, and C are variables). A consistency rule may include consistency across multiple data elements, and may define data element relevance and relation according to an algorithm and/or business-related routine. Rule data 254 may describe the rule and may be derived according to profile data 252.
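  • These association metrics can be made concrete; the sketch below computes support, confidence, and lift for a rule X→Y over toy transaction data (the data is invented for illustration):

```python
# Each transaction is the set of items it contains (toy data for illustration).
transactions = [
    {"X", "Y"}, {"X", "Y"}, {"X"}, {"Y"}, {"X", "Y", "Z"}, {"Z"},
]
n = len(transactions)

support_xy = sum(1 for t in transactions if {"X", "Y"} <= t) / n   # P(X and Y)
support_x  = sum(1 for t in transactions if "X" in t) / n          # P(X)
support_y  = sum(1 for t in transactions if "Y" in t) / n          # P(Y)

confidence = support_xy / support_x          # P(Y | X)
lift = support_xy / (support_x * support_y)  # observed vs. independence

print(f"support={support_xy:.2f} confidence={confidence:.2f} lift={lift:.2f}")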
  • At step 1008, process 1000 may include assessing newly extracted data elements based on a comparison of the newly extracted data elements to the second set of suggested rules. Rules may be generated automatically and also may be monitored by users prior to their application to additional data sets. Once approved by a user, new rules may be automatically applied to extracted data on a periodic basis and may indicate, based on a determined threshold, whether a data element is of acceptable quality for use by an organization, i.e., whether it passes or fails. The comparison may include an identification of data defects determined by processor 220 or network server 160. Further, user actions may also operate as an input to the rule suggestion framework and may further refine rule suggestions. The process may further include clustering the assessed data elements into multiple segments according to defects detected in the assessed data elements. Based on the rules application, defects may be triggered. Identification of data defects may be collated, and a data quality score may be assigned to an element whether or not the data possesses any defects (as further referenced in FIGS. 5-9). Based on an assigned score or ranking, a data element may be classified or categorized as possessing either failing or passing data quality. Based on this classification, data elements may be clustered or segmented. Clustering of data may also include more than two classifications.
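  • One hedged sketch of applying approved rules to newly extracted data and segmenting rows by pass/fail (assuming pandas; the rule predicates and column name are illustrative):

```python
import pandas as pd

# Approved rules expressed as predicates over a column (illustrative only).
rules = {
    "null_rule":  lambda s: s.notna(),
    "range_rule": lambda s: s.between(0, 100),
}

new_data = pd.DataFrame({"pmt_pdue_ct": [3, None, 250, 7]})
column = new_data["pmt_pdue_ct"]

# A row is defective if it fails any approved rule.
failures = pd.DataFrame({name: ~check(column) for name, check in rules.items()})
new_data["defect"] = failures.any(axis=1)

# Simple two-way segmentation by pass/fail (finer clustering is possible).
passing = new_data[~new_data["defect"]]
failing = new_data[new_data["defect"]]
print(f"{len(failing)} defective of {len(new_data)} rows")  # 2 of 4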
  • At step 1010, process 1000 may analyze data quality corresponding to defects detected in the assessed data elements. A decision tree algorithm from a comprehensive platform or from a third-party tool may be used to find pockets of defect concentration (as illustrated in FIG. 9). A defect may be defined as an event, and a population of data may be divided into multivariate subpopulations based on an event rate or concentration of defects. The data may be isolated and analyzed according to detected defects. Curated settings adjusted by a user may be used to identify nodes with a large concentration of defects (as illustrated in FIG. 9). Within the tool, a user can select the type of defect on which he or she wants to perform further analysis, and can generate the list of defects at the click of a button. Based on identification and analysis of data, data can be further clustered, segmented, and used to provide an overall assessment of data health. Additionally, data with defects can be disregarded.
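  • A minimal sketch of locating pockets of defect concentration with a decision tree, assuming scikit-learn and synthetic data (the feature names, thresholds, and defect rates are invented for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic accounts: two explanatory variables and a defect flag whose
# rate is concentrated where var_a is high (a "pocket" of defects).
var_a = rng.uniform(0, 1, 1000)
var_b = rng.uniform(0, 1, 1000)
defect = (var_a > 0.8) & (rng.uniform(0, 1, 1000) < 0.7)

X = np.column_stack([var_a, var_b])
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, defect)

# Each leaf is a candidate segment; report leaves with high defect rates.
leaf_ids = tree.apply(X)
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    rate = defect[mask].mean()
    if rate > 0.5:  # threshold for a "pocket of defect concentration"
        print(f"leaf {leaf}: {mask.sum()} rows, defect rate {rate:.0%}")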
  • At step 1012, process 1000 may deliver an output of the data quality to an associated personal device for display and data monitoring by a user. A summary dashboard may be generated (as referenced in FIGS. 4-9) to display data quality results on an ongoing basis. The dashboard may be displayed as part of a web browser, and may be displayed as part of an application executed on a personal device 120. Results, details, and trends may be displayed in a graphical user interface. Additionally, historical results may be displayed. Monitoring of existing data may be compared with historical runs. An element's summary statistics may also be displayed to help in monitoring any sudden change in the data quality metrics of one or more data elements. The dashboard may integrate data quality results of multiple data tables and datasets into a single view and summarize “Data Health” for a group of datasets at an LOB level or at a Database level. “Data Health” may be defined as the percentage of elements passing all tests across all variables at a particular grouping level.
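  • Using the definition above, “Data Health” at a grouping level might be computed as in this sketch (the result structure and group labels are assumptions for illustration):

```python
# "Data Health": percentage of elements passing all tests at a grouping level.
# `element_results` maps (group, element) -> per-test pass flags (assumed shape).
element_results = {
    ("LOB_A", "PMT_PDUE_CT"):  [True, True, True],
    ("LOB_A", "EXTNSN_CNT"):   [True, False, True],
    ("LOB_B", "TOT_PROM_CNT"): [True, True],
}

def data_health(results: dict, group: str) -> float:
    """Percentage of the group's elements that pass every test."""
    elems = [flags for (g, _), flags in results.items() if g == group]
    passing = sum(all(flags) for flags in elems)
    return 100.0 * passing / len(elems) if elems else 0.0

print(f"LOB_A health: {data_health(element_results, 'LOB_A'):.0f}%")  # 50%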
  • FIG. 11 is a flow chart illustrating another exemplary process 1100 that one or more processors may perform in accordance with the disclosed embodiments. While the exemplary process 1100 is described herein as a series of steps, it is to be understood that the order of the steps may vary in other implementations. In particular, steps may be performed in any order, or in parallel.
  • At step 1102, process 1100 may include providing a plurality of user-adjustable data feature settings to a user. The user-adjustable data feature settings may be provided as a tab, a slider, a drop-down menu, or a toggle. The user-adjustable data feature settings may include at least one of a variable, a threshold, a rule, a score, or a test. A graphical user interface (GUI) as described (with reference to FIGS. 4-9) may include a plurality of user-adjustable data feature settings. Other user-adjustable data feature settings (not shown) may be contemplated.
  • At step 1104, process 1100 may include detecting a user interaction modifying the user-adjustable data feature settings. A detection may include one or more pressure sensors detecting a user's finger pressing on a user interface panel of a personal device 120. A detection may also include processor 220 detecting a change in a selection of the user-adjustable data feature settings. For example, a change in selection of a drop-down menu or slider may indicate that user-adjustable features have been modified according to a variable and/or rule. Other detection mechanisms may be contemplated.
  • At step 1106, process 1100 may include determining customized data feature settings based on historical user interaction. For example, GUI data 256 may be stored as representative of customized user preferences for monitoring users. GUI data 256 may be associated with profile data 252 and rule data 254. GUI data 256 may be customized according to user, profile, rule, variable, cluster defect segmentation, and/or additional categories. Capturing user selection/customization of rules over time may also allow for better organizational recommendations. Other customized data feature settings may be contemplated.
  • At step 1108, process 1100 may include automatically creating a first set of applicable rules based on the customized data feature settings. Codes may be automatically generated for rules. Automatic creation of the first set of applicable rules may be associated with a representative data profile, historical data, and metadata, and processors 220 may be configured to store the representative data profile, historical data, and metadata in a cloud network. An automatically created first set of applicable rules may assess data quality according to a determined threshold. The threshold may be determined by a monitoring user as illustrated in FIG. 5. Alternatively, a threshold may be determined automatically based on previously extracted data elements and the representative data profile. The threshold may define criteria for acceptable or unacceptable data quality. Processor 220 may be configured to define a conditional rule by identifying subpopulations, historical data, and contingent relationships between the extracted data elements. A rule may also comprise at least one of a null rule, a range rule, a uniqueness rule, a valid value rule, a conditional rule, or a consistency rule. Other rule types and/or formats may be contemplated.
  • While the present disclosure has been shown and described with reference to particular embodiments thereof, it will be understood that the present disclosure can be practiced, without modification, in other environments. The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. Additionally, although aspects of the disclosed embodiments are described as being stored in memory, one skilled in the art will appreciate that these aspects can also be stored on other types of computer readable media, such as secondary storage devices, for example, hard disks or CD ROM, or other forms of RAM or ROM, USB media, DVD, Blu-ray, or other optical drive media.
  • Computer programs based on the written description and disclosed methods are within the skill of an experienced developer. Various programs or program modules can be created using any of the techniques known to one skilled in the art or can be designed in connection with existing software. For example, program sections or program modules can be designed in or by means of .Net Framework, .Net Compact Framework (and related languages, such as Visual Basic, C, etc.), Java, C++, Objective-C, HTML, HTML/AJAX combinations, XML, or HTML with included Java applets.
  • Moreover, while illustrative embodiments have been described herein, the scope of the present disclosure includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations as would be appreciated by those skilled in the art based on the present disclosure. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. The examples are to be construed as non-exclusive. Furthermore, the steps of the disclosed methods may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.

Claims (20)

What is claimed is:
1. A system for providing data quality management, the system comprising:
at least one memory storing instructions; and
at least one processor connected to a network and executing the instructions to perform operations comprising:
obtaining a set of rules for assessing data quality of a set of data elements;
extracting a plurality of data elements from a data source;
assessing the data elements based on a comparison of the data elements to the set of rules;
detecting a plurality of defects based on the comparison;
determining, using a decision tree algorithm, data quality according to the detected defects, the data quality comprising a pocket of defect concentration; and
transmitting, to a client device, instructions to display a representation of the determined pocket of defect concentration in a user interface of the client device.
2. The system of claim 1, wherein the operations further comprise:
clustering the data elements into multiple segments based on the defects.
3. The system of claim 1, wherein assessing the data elements includes:
generating a first set of rules based on a property of the set of data elements;
generating a second set of rules based on the set of data elements and the first set of rules; and
assessing the data elements based on the second set of rules.
4. The system of claim 3, wherein the first set of rules includes a null rule, a range rule, a uniqueness rule, a valid value rule, a format rule, a conditional rule, or a consistency rule, and wherein the second set of rules includes a framework criteria comprising a support value, a confidence value, or a lift ratio.
5. A method for providing data quality management, the method comprising:
obtaining a set of rules for assessing data quality of a set of data elements;
extracting a plurality of data elements from a data source;
assessing the data elements based on a comparison of the data elements to the set of rules;
detecting a plurality of defects based on the comparison;
determining data quality according to the detected defects, the data quality comprising a pocket of defect concentration; and
transmitting, to a client device, information related to the determined pocket of defect concentration.
6. The method of claim 5, further comprising:
clustering the data elements into multiple segments based on the defects.
7. The method of claim 5, wherein obtaining the set of rules includes:
generating a first set of rules based on a property of the set of data elements; and
generating a second set of rules based on the set of data elements and the first set of rules.
8. The method of claim 7, wherein assessing the data elements includes:
assessing the data elements based on the second set of rules.
9. The method of claim 7, wherein the first set of rules includes a null rule, a range rule, a uniqueness rule, a valid value rule, a format rule, a conditional rule, or a consistency rule.
10. The method of claim 7, wherein the second set of rules includes a framework criteria comprising a support value, a confidence value, or a lift ratio.
11. The method of claim 7, wherein obtaining the set of rules includes:
generating the second set of rules based on a user input for adjusting the first set of rules.
12. The method of claim 7, wherein obtaining the set of rules includes:
generating the first set of rules based on a property of the set of data elements.
13. The method of claim 12, wherein the property includes a portion of a name of the first data elements.
14. The method of claim 5, wherein obtaining the set of rules includes:
generating a data profile based on the set of data elements; and
generating a first set of rules based on the data profile.
15. The method of claim 14, wherein the data profile includes organizational information, authentication timestamp information, network node location, network node preference, access point location, or access point preference.
16. The method of claim 5, wherein extracting the data elements includes parsing the data elements according to an alphanumeric identifier or a data list from the data source.
17. A non-transitory computer-readable medium for providing data quality management, comprising instructions that, when executed by one or more processors, cause operations comprising:
obtaining a set of rules for assessing data quality of a set of data elements;
extracting a plurality of data elements from a data source;
assessing the data elements based on a comparison of the data elements to the set of rules;
detecting a plurality of defects based on the comparison;
determining data quality according to the detected defects, the data quality comprising a pocket of defect concentration; and
transmitting, to a client device, information related to the determined pocket of defect concentration.
18. The computer-readable medium of claim 17, wherein the operations further comprise:
clustering the data elements into multiple segments based on the defects.
19. The computer-readable medium of claim 17, wherein obtaining the set of rules includes:
generating a first set of rules based on a property of the set of data elements; and
generating a second set of rules based on the set of data elements and the first set of rules.
20. The computer-readable medium of claim 19, wherein assessing the data elements includes:
assessing the data elements based on the second set of rules.
US17/326,803 2016-12-19 2021-05-21 Systems and methods for providing data quality management Abandoned US20210279215A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/326,803 US20210279215A1 (en) 2016-12-19 2021-05-21 Systems and methods for providing data quality management

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662436258P 2016-12-19 2016-12-19
US15/847,674 US10185728B2 (en) 2016-12-19 2017-12-19 Systems and methods for providing data quality management
US16/252,024 US11030167B2 (en) 2016-12-19 2019-01-18 Systems and methods for providing data quality management
US17/326,803 US20210279215A1 (en) 2016-12-19 2021-05-21 Systems and methods for providing data quality management

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/252,024 Continuation US11030167B2 (en) 2016-12-19 2019-01-18 Systems and methods for providing data quality management

Publications (1)

Publication Number Publication Date
US20210279215A1 true US20210279215A1 (en) 2021-09-09

Family

ID=62561741

Family Applications (3)

Application Number Title Priority Date Filing Date
US15/847,674 Active US10185728B2 (en) 2016-12-19 2017-12-19 Systems and methods for providing data quality management
US16/252,024 Active 2038-07-23 US11030167B2 (en) 2016-12-19 2019-01-18 Systems and methods for providing data quality management
US17/326,803 Abandoned US20210279215A1 (en) 2016-12-19 2021-05-21 Systems and methods for providing data quality management

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US15/847,674 Active US10185728B2 (en) 2016-12-19 2017-12-19 Systems and methods for providing data quality management
US16/252,024 Active 2038-07-23 US11030167B2 (en) 2016-12-19 2019-01-18 Systems and methods for providing data quality management

Country Status (2)

Country Link
US (3) US10185728B2 (en)
CA (1) CA2989617A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024155676A1 (en) * 2023-01-18 2024-07-25 Visa International Service Association Data segmentation using clustering and decision tree
US12093243B1 (en) 2023-01-09 2024-09-17 Wells Fargo Bank, N.A. Metadata quality monitoring and remediation

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10185728B2 (en) * 2016-12-19 2019-01-22 Capital One Services, Llc Systems and methods for providing data quality management
US10795901B2 (en) * 2017-05-09 2020-10-06 Jpmorgan Chase Bank, N.A. Generic entry and exit network interface system and method
US11176107B2 (en) * 2018-12-07 2021-11-16 International Business Machines Corporation Processing data records in a multi-tenant environment to ensure data quality
US11113340B2 (en) * 2018-12-21 2021-09-07 Jpmorgan Chase Bank, N.A. Data generation and certification
CN111400288A (en) * 2019-01-02 2020-07-10 中国移动通信有限公司研究院 Data quality inspection method and system
CN110109768B (en) * 2019-03-29 2023-02-17 创新先进技术有限公司 Data quality inspection method and device
US11157470B2 (en) * 2019-06-03 2021-10-26 International Business Machines Corporation Method and system for data quality delta analysis on a dataset
US11461671B2 (en) 2019-06-03 2022-10-04 Bank Of America Corporation Data quality tool
US11645659B2 (en) * 2019-07-31 2023-05-09 Nutanix, Inc. Facilitating customers to define policies for their clouds
US11366826B2 (en) * 2019-09-05 2022-06-21 International Business Machines Corporation Customizing data visualizations according to user activity
CN111143159A (en) * 2019-12-05 2020-05-12 江苏东智数据技术股份有限公司 Data monitoring method and device
US12013840B2 (en) * 2020-04-17 2024-06-18 International Business Machines Corporation Dynamic discovery and correction of data quality issues
CN112035456B (en) * 2020-08-31 2024-05-03 重庆长安汽车股份有限公司 Real-time detection method for user behavior data quality and storage medium
US20220076157A1 (en) * 2020-09-04 2022-03-10 Aperio Global, LLC Data analysis system using artificial intelligence
CN112101447B (en) * 2020-09-10 2024-04-16 北京百度网讯科技有限公司 Quality evaluation method, device, equipment and storage medium for data set
US11314818B2 (en) * 2020-09-11 2022-04-26 Talend Sas Data set inventory and trust score determination
CN113051292A (en) * 2021-04-19 2021-06-29 中国工商银行股份有限公司 Data checking method and device
US20240256505A1 (en) * 2021-05-13 2024-08-01 Schlumberger Technology Corporation Dynamic oil and gas data quality visualization suggestion
CN113282957A (en) * 2021-06-03 2021-08-20 光大科技有限公司 Data asset racking processing method and device
CN113900771B (en) * 2021-10-14 2024-03-12 苏州申浪信息科技有限公司 Industrial data transmission method using container cloud host
WO2024039017A1 (en) * 2022-08-16 2024-02-22 Samsung Electronics Co., Ltd. Method and apparatus for managing quality of data
US12050568B2 (en) * 2022-09-09 2024-07-30 Genworth Holdings, Inc. System and method for implementing a data quality framework and engine
CN115630920A (en) * 2022-10-20 2023-01-20 中铁一局集团有限公司 Engineering equipment instrument management system based on cloud platform
CN115827616A (en) * 2022-12-15 2023-03-21 国网江苏省电力有限公司 Power work order data quality management method, system, medium and computing device
CN116049157B (en) * 2023-01-04 2024-05-07 北京京航计算通讯研究所 Quality data analysis method and system

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4902469A (en) * 1986-05-05 1990-02-20 Westinghouse Electric Corp. Status tree monitoring and display system
US5860011A (en) * 1996-02-29 1999-01-12 Parasoft Corporation Method and system for automatically checking computer source code quality based on rules
US20080097789A1 (en) * 2006-10-24 2008-04-24 Huffer Robert L Quality Management Of Patient Data For Health Care Providers
US20100082627A1 (en) * 2008-09-24 2010-04-01 Yahoo! Inc. Optimization filters for user generated content searches
US20120072464A1 (en) * 2010-09-16 2012-03-22 Ronen Cohen Systems and methods for master data management using record and field based rules
US20120197887A1 (en) * 2011-01-28 2012-08-02 Ab Initio Technology Llc Generating data pattern information
US20130031044A1 (en) * 2011-07-29 2013-01-31 Accenture Global Services Limited Data quality management
US20130055042A1 (en) * 2011-08-31 2013-02-28 Accenture Global Services Limited Data quality analysis and management system
US20130173322A1 (en) * 2011-12-30 2013-07-04 Schneider Electric USA, Inc. Energy Management with Correspondence Based Data Auditing Signoff
US8700577B2 (en) * 2009-12-07 2014-04-15 Accenture Global Services Limited GmbH Method and system for accelerated data quality enhancement
US9158805B1 (en) * 2013-03-12 2015-10-13 Amazon Technologies, Inc. Statistical data quality determination for storage systems
US20160103823A1 (en) * 2014-10-10 2016-04-14 The Trustees Of Columbia University In The City Of New York Machine Learning Extraction of Free-Form Textual Rules and Provisions From Legal Documents
US20170004413A1 (en) * 2015-06-30 2017-01-05 The Boeing Company Data driven classification and data quality checking system
US10185728B2 (en) * 2016-12-19 2019-01-22 Capital One Services, Llc Systems and methods for providing data quality management
US20200320632A1 (en) * 2015-12-24 2020-10-08 Jpmorgan Chase Bank, N.A. Method and system for time series data quality management

Family Cites Families (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5032978A (en) * 1986-05-05 1991-07-16 Westinghouse Electric Co. Status tree monitoring and display system
US8099257B2 (en) * 2001-08-24 2012-01-17 Bio-Rad Laboratories, Inc. Biometric quality control process
AU2002313818B2 (en) * 2001-08-24 2007-12-20 Bio-Rad Laboratories, Inc. Biometric quality control process
US20060020641A1 (en) * 2002-03-25 2006-01-26 Data Quality Solutions Business process management system and method
JP2004012422A (en) * 2002-06-11 2004-01-15 Dainippon Screen Mfg Co Ltd Pattern inspection device, pattern inspection method, and program
US7602962B2 (en) * 2003-02-25 2009-10-13 Hitachi High-Technologies Corporation Method of classifying defects using multiple inspection machines
US8200775B2 (en) * 2005-02-01 2012-06-12 Newsilike Media Group, Inc Enhanced syndication
US7620888B2 (en) * 2003-12-04 2009-11-17 Microsoft Corporation Quality enhancement systems and methods for technical documentation
GB2419974A (en) * 2004-11-09 2006-05-10 Finsoft Ltd Calculating the quality of a data record
US8782087B2 (en) * 2005-03-18 2014-07-15 Beyondcore, Inc. Analyzing large data sets to find deviation patterns
US7849062B1 (en) * 2005-03-18 2010-12-07 Beyondcore, Inc. Identifying and using critical fields in quality management
US9940405B2 (en) * 2011-04-05 2018-04-10 Beyondcore Holdings, Llc Automatically optimizing business process platforms
AU2006260795A1 (en) * 2005-06-20 2006-12-28 Future Route Limited Analytical system for discovery and generation of rules to predict and detect anomalies in data and financial fraud
US7814111B2 (en) * 2006-01-03 2010-10-12 Microsoft International Holdings B.V. Detection of patterns in data records
US7711660B1 (en) * 2006-02-16 2010-05-04 Ingenix, Inc. Processing health insurance data utilizing data quality rules
US7912628B2 (en) * 2006-03-03 2011-03-22 Inrix, Inc. Determining road traffic conditions using data from multiple data sources
US10552899B2 (en) * 2006-03-20 2020-02-04 Rebecca S. Busch Anomaly tracking system and method for detecting fraud and errors in the healthcare field
JP4616864B2 (en) * 2007-06-20 2011-01-19 株式会社日立ハイテクノロジーズ Appearance inspection method and apparatus, and image processing evaluation system
JP5148306B2 (en) * 2008-01-31 2013-02-20 シスメックス株式会社 Accuracy management system for analysis apparatus, management apparatus, and information providing method
US8843487B2 (en) * 2009-08-18 2014-09-23 Black Oak Partners, Llc Process and method for data assurance management by applying data assurance metrics
US8589859B2 (en) * 2009-09-01 2013-11-19 Accenture Global Services Limited Collection and processing of code development information
US8250008B1 (en) * 2009-09-22 2012-08-21 Google Inc. Decision tree refinement
US8412735B2 (en) * 2009-12-07 2013-04-02 Accenture Global Services Limited Data quality enhancement for smart grid applications
US8515863B1 (en) * 2010-09-01 2013-08-20 Federal Home Loan Mortgage Corporation Systems and methods for measuring data quality over time
WO2012061162A1 (en) * 2010-10-25 2012-05-10 Intelius Inc. Cost-sensitive alternating decision trees for record linkage
US8751436B2 (en) * 2010-11-17 2014-06-10 Bank Of America Corporation Analyzing data quality
US20120149339A1 (en) * 2010-12-10 2012-06-14 MobileIron, Inc. Archiving Text Messages
US20120150825A1 (en) * 2010-12-13 2012-06-14 International Business Machines Corporation Cleansing a Database System to Improve Data Quality
US8577849B2 (en) * 2011-05-18 2013-11-05 Qatar Foundation Guided data repair
US9330148B2 (en) * 2011-06-30 2016-05-03 International Business Machines Corporation Adapting data quality rules based upon user application requirements
US10248672B2 (en) * 2011-09-19 2019-04-02 Citigroup Technology, Inc. Methods and systems for assessing data quality
US20130166515A1 (en) * 2011-12-22 2013-06-27 David Kung Generating validation rules for a data report based on profiling the data report in a data processing tool
US9152662B2 (en) * 2012-01-16 2015-10-06 Tata Consultancy Services Limited Data quality analysis
US9401013B2 (en) * 2012-02-03 2016-07-26 Applied Materials Israel, Ltd. Method of design-based defect classification and system thereof
US10083483B2 (en) * 2013-01-09 2018-09-25 Bank Of America Corporation Actionable exception alerts
US9576036B2 (en) * 2013-03-15 2017-02-21 International Business Machines Corporation Self-analyzing data processing job to determine data quality issues
US9626392B2 (en) * 2013-03-29 2017-04-18 Schlumberger Technology Corporation Context transfer for data storage
US9310320B2 (en) * 2013-04-15 2016-04-12 Kla-Tencor Corp. Based sampling and binning for yield critical defects
US9292599B2 (en) * 2013-04-30 2016-03-22 Wal-Mart Stores, Inc. Decision-tree based quantitative and qualitative record classification
US9427185B2 (en) * 2013-06-20 2016-08-30 Microsoft Technology Licensing, Llc User behavior monitoring on a computerized device
DE102013224378A1 (en) * 2013-09-18 2015-03-19 Rohde & Schwarz GmbH & Co. KG Automated evaluation of test protocols in the telecommunications sector
US10572456B2 (en) * 2013-09-24 2020-02-25 Here Global B.V. Method, apparatus, and computer program product for data quality analysis
US9489599B2 (en) * 2013-11-03 2016-11-08 Kla-Tencor Corp. Decision tree construction for automatic classification of defects on semiconductor wafers
US9529851B1 (en) * 2013-12-02 2016-12-27 Experian Information Solutions, Inc. Server architecture for electronic data quality processing
GB201322057D0 (en) * 2013-12-13 2014-01-29 Qatar Foundation Descriptive and prescriptive data cleaning
US10089409B2 (en) * 2014-04-29 2018-10-02 Microsoft Technology Licensing, Llc Event-triggered data quality verification
US10877955B2 (en) * 2014-04-29 2020-12-29 Microsoft Technology Licensing, Llc Using lineage to infer data quality issues
US9652489B2 (en) * 2014-08-26 2017-05-16 Bank Of America Corporation Compliance verification system
US20160063441A1 (en) * 2014-08-29 2016-03-03 LinkedIn Corporation Job poster identification
US9600504B2 (en) * 2014-09-08 2017-03-21 International Business Machines Corporation Data quality analysis and cleansing of source data with respect to a target system
GB201417129D0 (en) * 2014-09-29 2014-11-12 Ibm A method of processing data errors for a data processing system
US20160162507A1 (en) * 2014-12-05 2016-06-09 International Business Machines Corporation Automated data duplicate identification
US20160178414A1 (en) * 2014-12-17 2016-06-23 General Electric Company System and methods for addressing data quality issues in industrial data
US9417076B2 (en) * 2014-12-29 2016-08-16 Here Global B.V. Total route score to measure quality of map content
US10354210B2 (en) * 2015-04-16 2019-07-16 Entit Software Llc Quality prediction
US10354419B2 (en) * 2015-05-25 2019-07-16 Colin Frederick Ritchie Methods and systems for dynamic graph generating
US9922269B2 (en) * 2015-06-05 2018-03-20 Kla-Tencor Corporation Method and system for iterative defect classification
US20160364648A1 (en) * 2015-06-09 2016-12-15 Florida Power And Light Company Outage prevention in an electric power distribution grid using smart meter messaging
US10409802B2 (en) * 2015-06-12 2019-09-10 Ab Initio Technology Llc Data quality analysis
US10083403B2 (en) * 2015-06-30 2018-09-25 The Boeing Company Data driven classification and data quality checking method
US10127264B1 (en) * 2015-09-17 2018-11-13 Ab Initio Technology Llc Techniques for automated data analysis
US9779316B2 (en) * 2015-09-30 2017-10-03 Tyco Fire & Security GmbH Scalable and distributed biometric processing
US20170103101A1 (en) * 2015-10-07 2017-04-13 Telogis, Inc. System for database data quality processing
US10387230B2 (en) * 2016-02-24 2019-08-20 Bank Of America Corporation Technical language processor administration
US10366367B2 (en) * 2016-02-24 2019-07-30 Bank Of America Corporation Computerized system for evaluating and modifying technology change events
US10776740B2 (en) * 2016-06-07 2020-09-15 International Business Machines Corporation Detecting potential root causes of data quality issues using data lineage graphs
US10331542B2 (en) * 2016-06-23 2019-06-25 International Business Machines Corporation System and method for detecting and alerting unexpected behavior of software applications
US20170372232A1 (en) * 2016-06-27 2017-12-28 Purepredictive, Inc. Data quality detection and compensation for machine learning
US10185884B2 (en) * 2016-09-07 2019-01-22 Apple Inc. Multi-dimensional objective metric concentering
US10002140B2 (en) * 2016-09-26 2018-06-19 Uber Technologies, Inc. Geographical location search using multiple data sources
US10585864B2 (en) * 2016-11-11 2020-03-10 International Business Machines Corporation Computing the need for standardization of a set of values
US11379537B2 (en) * 2016-11-18 2022-07-05 Accenture Global Solutions Limited Closed-loop unified metadata architecture with universal metadata repository
US10296880B2 (en) * 2016-11-21 2019-05-21 Lisa Therese Miller Invoice analytics system
US10620618B2 (en) * 2016-12-20 2020-04-14 Palantir Technologies Inc. Systems and methods for determining relationships between defects
US10379920B2 (en) * 2017-06-23 2019-08-13 Accenture Global Solutions Limited Processing data to improve a quality of the data

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4902469A (en) * 1986-05-05 1990-02-20 Westinghouse Electric Corp. Status tree monitoring and display system
US5860011A (en) * 1996-02-29 1999-01-12 Parasoft Corporation Method and system for automatically checking computer source code quality based on rules
US20080097789A1 (en) * 2006-10-24 2008-04-24 Huffer Robert L Quality Management Of Patient Data For Health Care Providers
US20100082627A1 (en) * 2008-09-24 2010-04-01 Yahoo! Inc. Optimization filters for user generated content searches
US8700577B2 (en) * 2009-12-07 2014-04-15 Accenture Global Services Limited Method and system for accelerated data quality enhancement
US20120072464A1 (en) * 2010-09-16 2012-03-22 Ronen Cohen Systems and methods for master data management using record and field based rules
US20120197887A1 (en) * 2011-01-28 2012-08-02 Ab Initio Technology Llc Generating data pattern information
US20130031044A1 (en) * 2011-07-29 2013-01-31 Accenture Global Services Limited Data quality management
US20130055042A1 (en) * 2011-08-31 2013-02-28 Accenture Global Services Limited Data quality analysis and management system
US20130173322A1 (en) * 2011-12-30 2013-07-04 Schneider Electric USA, Inc. Energy Management with Correspondence Based Data Auditing Signoff
US9158805B1 (en) * 2013-03-12 2015-10-13 Amazon Technologies, Inc. Statistical data quality determination for storage systems
US20160103823A1 (en) * 2014-10-10 2016-04-14 The Trustees Of Columbia University In The City Of New York Machine Learning Extraction of Free-Form Textual Rules and Provisions From Legal Documents
US20170004413A1 (en) * 2015-06-30 2017-01-05 The Boeing Company Data driven classification and data quality checking system
US20200320632A1 (en) * 2015-12-24 2020-10-08 Jpmorgan Chase Bank, N.A. Method and system for time series data quality management
US10185728B2 (en) * 2016-12-19 2019-01-22 Capital One Services, Llc Systems and methods for providing data quality management
US11030167B2 (en) * 2016-12-19 2021-06-08 Capital One Services, Llc Systems and methods for providing data quality management

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hamad et al., "An Enhanced Technique to Clean Data in the Data Warehouse", 2011 Developments in E-systems Engineering, IEEE Computer Society, 2011, pp. 306-311. (Year: 2011) *
Klas et al., "Quality Evaluation for Big Data - A Scalable Assessment Approach and First Evaluation Results", 2016 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement, IEEE, 2016, pp. 115-124. (Year: 2016) *
Kumar et al., "Efficient Quality Assessment Technique with Integrated Cluster Validation and Decision Trees", International Journal of Computer Applications (0975-8887), Vol. 21, No. 9, May 2011, pp. 30-36. (Year: 2011) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12093243B1 (en) 2023-01-09 2024-09-17 Wells Fargo Bank, N.A. Metadata quality monitoring and remediation
WO2024155676A1 (en) * 2023-01-18 2024-07-25 Visa International Service Association Data segmentation using clustering and decision tree

Also Published As

Publication number Publication date
US20190155797A1 (en) 2019-05-23
CA2989617A1 (en) 2018-06-19
US10185728B2 (en) 2019-01-22
US20180173733A1 (en) 2018-06-21
US11030167B2 (en) 2021-06-08

Similar Documents

Publication Title
US20210279215A1 (en) Systems and methods for providing data quality management
US11670021B1 (en) Enhanced graphical user interface for representing events
US11102224B2 (en) Systems and user interfaces for dynamic and interactive investigation of bad actor behavior based on automatic clustering of related data in various data structures
US11012404B1 (en) Transaction lifecycle management
US10884891B2 (en) Interactive detection of system anomalies
US10033611B1 (en) Transaction lifecycle management
US10121157B2 (en) Recommending user actions based on collective intelligence for a multi-tenant data analysis system
US8370181B2 (en) System and method for supply chain data mining and analysis
WO2019023982A1 (en) Multi-dimensional industrial knowledge graph
US10019681B2 (en) Multidimensional recursive learning process and system used to discover complex dyadic or multiple counterparty relationships
JP5017434B2 (en) Information processing apparatus and program
Guglielmi et al. Semiparametric Bayesian models for clustering and classification in the presence of unbalanced in-hospital survival
US11816112B1 (en) Systems and methods for automated process discovery
JP2009009342A (en) Information processing unit and program
US10417201B2 (en) Systems and methods for adaptively identifying and mitigating statistical outliers in aggregated data
JP6838150B2 (en) Data name classification support device and data name classification support program
JP2019101829A (en) Software component management system, computor, and method
JP6403864B2 (en) Service design support system and service design support method
US20150112771A1 (en) Systems, methods, and program products for enhancing performance of an enterprise computer system
US12020046B1 (en) Systems and methods for automated process discovery
KR20180073302A (en) System and method for analyzing alarm information in mulitple time-series monitoring system
Mollá et al. Data stream solution for decision-making processes: a general and adaptive system for decision support
CN117495078A (en) User-guided risk prioritization method and system in process
KR20240104530A (en) Analysis process navigation-based meta-analysis method and apparatus
CN118797273A (en) Data analysis system, method, equipment and medium based on government industry large model

Legal Events

Code Title Description
AS Assignment
Owner name: CAPITAL ONE SERVICES, LLC, VIRGINIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NATH, YATINDRA;GARG, ANKUR;TIWARI, RAJEEV;AND OTHERS;SIGNING DATES FROM 20170904 TO 20190529;REEL/FRAME:056313/0846
STPP Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STPP Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STCB Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION