WO2023034328A2 - Correlating parallelized data from disparate data sources to aggregate graph data portions to predictively identify entity data - Google Patents

Correlating parallelized data from disparate data sources to aggregate graph data portions to predictively identify entity data

Info

Publication number
WO2023034328A2
WO2023034328A2 PCT/US2022/042077
Authority
WO
WIPO (PCT)
Prior art keywords
data
parallelized
graph
representing
subset
Prior art date
Application number
PCT/US2022/042077
Other languages
English (en)
Other versions
WO2023034328A3 (fr)
Inventor
Shawn Andrew Pardue Smith
Bryon Kristen Jacob
Original Assignee
Data.World, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/461,982 external-priority patent/US11755602B2/en
Application filed by Data.World, Inc. filed Critical Data.World, Inc.
Publication of WO2023034328A2 publication Critical patent/WO2023034328A2/fr
Publication of WO2023034328A3 publication Critical patent/WO2023034328A3/fr

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25 - Integrating or interfacing systems involving database management systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 - Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 - Relational databases
    • G06F16/285 - Clustering or classification

Definitions

  • Various embodiments relate generally to data science and data analysis, computer software and systems, and data-driven control systems and algorithms based on graph-based data arrangements, among other things, and, more specifically, to a computing platform configured to receive or analyze datasets in parallel by implementing, for example, parallel computing processor systems to correlate subsets of parallelized data from disparately-formatted data sources to identify entity data and to aggregate graph data portions, among other things.
  • data management and analysis applications such as query programming language applications and data analytic applications
  • a distributed data architecture such as a “cloud” -based computing platform.
  • data practitioners generally may be required to intervene manually to apply derived formulaic data models to datasets, such as using local computing resources, which imposes a burden to update and maintain data terms and definitions as well as to store relatively large amounts of data in a particular data format and database schema (e.g., relying on relational databases and relational table data formats).
  • FIG. 1 is a diagram depicting a computing system configured to correlate portions of parallelized data to identify entity data predictively, according to some embodiments
  • FIG. 2 depicts an example of a dataset ingestion controller, according to some examples
  • FIG. 3 depicts an example of an attribute correlator, according to some examples
  • FIG. 4 illustrates an exemplary layered architecture for implementing a collaborative dataset consolidation system application, according to some examples
  • FIG. 5 is a flow diagram as an example of analyzing parallelized data formatted as graphbased data with correlated data attributes to consolidate observation data to form content graph portions, according to some embodiments;
  • FIG. 6 is a flow diagram as an example of forming an ancillary graph to remediate data for integration into an enriched arrangement of graph data, according to some embodiments
  • FIG. 7 is a flow diagram as an example of modifying the content graph for integration into an enriched arrangement of graph data, according to some embodiments.
  • FIG. 8 depicts an example of a portion of a content graph, according to some examples.
  • FIGs. 9A and 9B depict examples of correlating attribute data values to construct a content graph portion, at least in some examples
  • FIGs. 10A and 10B depict other examples of correlating attribute data values to construct a content graph portion, at least in some examples
  • FIGs. 11A and 11B depict examples of clustering units of observation data of content graph portions to identify an individual entity, at least in some examples
  • FIG. 12 depicts an example of data representing aggregating individual entities to form aggregated data, according to some examples
  • FIG. 13 depicts an example of a data catalog and a knowledge graph implemented as a cloud-based service, according to some examples;
  • FIG. 15 illustrates examples of various computing platforms configured to provide various functionalities to components of a computing platform 1500 configured to provide functionalities described herein.
  • “software” or “application” may also be used interchangeably or synonymously with, or refer to, a computer program, software, program, firmware, or any other term that may be used to describe, reference, or refer to a logical set of instructions that, when executed, performs a function or set of functions in association with a computing system or machine, regardless of whether physical, logical, or virtual and without restriction or limitation to any particular implementation, design, configuration, instance, or state.
  • platform may refer to any type of computer hardware (hereafter “hardware”) or software, or any combination thereof, that may use one or more local, remote, distributed, networked, or computing cloud (hereafter “cloud”)-based computing resources (e.g., computers, clients, servers, tablets, notebooks, smart phones, cell phones, mobile computing platforms or tablets, and the like) to provide an application, operating system, or other computing environment, such as those described herein, without restriction or limitation to any particular implementation, design, configuration, instance, or state.
  • Distributed resources such as cloud computing networks (also referred to interchangeably as “computing clouds,” “storage clouds,” “cloud networks,” or, simply, “clouds,” without restriction or limitation to any particular implementation, design, configuration, instance, or state) may be used for processing and/or storage of varying quantities, types, structures, and formats of data, without restriction or limitation to any particular implementation, design, or configuration.
  • data may be stored in various types of data structures including, but not limited to databases, data repositories, data warehouses, data stores, or other data structures or memory configured to store data in various computer programming languages and formats in accordance with various types of structured and unstructured database schemas such as SQL, MySQL, NoSQL, DynamoDBTM, etc. Also applicable are computer programming languages and formats similar or equivalent to those developed by data facility and computing providers such as Amazon® Web Services, Inc. of Seattle, Washington, FMP, Oracle®, Salesforce.com, Inc., or others, without limitation or restriction to any particular instance or implementation.
  • Examples of such cloud-based data services include DynamoDBTM, Amazon Elasticsearch Service, Amazon Kinesis Data Streams (“KDS”)TM, and Amazon Kinesis Data Analytics, among other Amazon Web Services (“AWS”) offerings.
  • cloud computing services include the Google® cloud platform that may implement a publisher-subscriber messaging service (e.g., Google® pub/sub architecture).
  • cloud computing and messaging services may include Apache Kafka, Apache Spark, and any other Apache software application and platforms, which are developed and maintained by Apache Software Foundation of Wilmington, Delaware, U.S.A.
  • references to databases, data structures, memory, or any type of data storage facility may include any embodiment as a local, remote, distributed, networked, cloud-based, or combined implementation thereof.
  • social networks and social media (e.g., “social media”) using different types of devices may generate (i.e., in the form of posts (which is to be distinguished from a POST request or call over HTTP) on social networks and social media) data in different forms, formats, layouts, data transfer protocols, and data storage schema for presentation on different types of devices that use, modify, or store data for purposes such as electronic messaging, audio or video rendering (e.g., user-generated content, such as deployed on YouTube®), content sharing, or like purposes.
  • Data may be generated in various formats such as text, audio, video (including three dimensional, augmented reality (“AR”), and virtual reality (“VR”)), or others, without limitation, as electronic messages for use on social networks, social media, and social applications (e.g., “social media”) such as Twitter® of San Francisco, California, Snapchat® as developed by Snap® of Venice, California, Messenger as developed by Facebook®, WhatsApp®, or Instagram® of Menlo Park, California, Pinterest® of San Francisco, California, Linkedln® of Mountain View, California, and others, without limitation or restriction.
  • the term “content” may refer to, for example, one or more of executable instructions (e.g., of an application, a program, or any other code compatible with a programming language), textual data, image data, video data, audio data, or any other data.
  • data may be formatted and transmitted via electronic messaging channels (i.e., transferred over one or more data communication protocols) between computing resources using various types of data communication and transfer protocols such as Hypertext Transfer Protocol (“HTTP”), Transmission Control Protocol (“TCP”)/ Internet Protocol (“IP”), Internet Relay Chat (“IRC”), SMS, text messaging, instant messaging (“IM”), File Transfer Protocol (“FTP”), or others, without limitation.
  • disclosed processes implemented as software may be programmed using Java®, JavaScript®, Scala, PythonTM, XML, HTML, and other data formats and programs, without limitation.
  • Disclosed processes herein may also implement software such as Streaming SQL applications, browser applications (e.g., FirefoxTM) and/or web applications, among others.
  • a browser application may implement a JavaScript framework, such as Ember .js, Meteor .js, ExtJS, AngularJS, and the like.
  • References to various layers of an application architecture may refer to a stacked layer application architecture such as the Open Systems Interconnect (“OSI”) model or others.
  • a distributed data file may include executable instructions as described above (e.g., JavaScript® or the like) or any data constituting content (e.g., text data, video data, audio data, etc.), or both.
  • In some examples, systems, software, platforms, and computing clouds, or any combination thereof, may be implemented to facilitate online distribution of subsets of units of any data, content, postings, electronic messages, and the like.
  • units of content, electronic postings, electronic messages, and the like may originate at social networks, social media, and social applications, or any other source of content.
  • FIG. 1 is a diagram depicting a computing system configured to correlate portions of parallelized data to identify entity data predictively, according to some embodiments.
  • Diagram 100 depicts an example of a networked (e.g., cloud-based) computing system, such as collaborative dataset consolidation system 150, that may be configured to access any amount of raw data via a network 191 from disparate data sources 190 to (1) analyze the data, to (2) deduplicate associated subsets of data through dataset consolidation, to (3) format data in a graph-based data format, to (4) resolve predictively identities of entities using data representations of objects and relationships among any number of entities, and to (5) provide any other number of functionalities.
  • collaborative dataset consolidation system 150 may be configured to generate and incrementally modify an arrangement of graph data using enriched data.
  • an arrangement of graph data may include an enriched arrangement of graph data 160.
  • an arrangement of graph data, such as enriched arrangement of graph data 160 may be configured to constitute a “knowledge graph.”
  • Data sources 190 may be accessed to provide any type of data in any format, such as structured data (e.g., data stored as data tables in relational databases accessible via, for example, SQL or other structured database languages), semistructured data (e.g., XML-formatted data, metadata, spreadsheet data, etc.), and unstructured data (e.g., PDF documents, GitHubTM Jupyter Notebook data, text document data, email document data, website data, etc.).
  • collaborative dataset consolidation system 150 may be configured to include any combination of hardware and software to analyze, deduplicate, format, and resolve data representing identities of entities, the data being received from data sources 190 as parallelized data 110.
  • collaborative dataset consolidation system 150 may be configured to process data from data resources 190 in parallel (or substantially in parallel), for example, in real-time or near real-time.
  • Collaborative dataset consolidation system 150 may include logic as hardware (e.g., multiple processors such as more than 200 to 1,600 core processors) and software, or any combination thereof, that may be configured to “massively parallel process” parallelized data 110 to analyze, deduplicate, format, and/or resolve data, thereby identifying identities of entities, as each of parallelized data streams 101a to 101n.
  • collaborative dataset consolidation system 150 may be configured to access data with more than one thousand data sources 190 to identify 50 billion or more subsets of observation data (or units of observation data) during a time interval (e.g., 24 hours or less).
  • Parallelized processing of data (e.g., raw data) from data sources 190 facilitates rapid and expeditious deduplication and consolidation of data to form units of observation data with which to resolve and identify unique entities, according to some examples.
  • logic configured to implement dataset ingestion controller 130 and dataset attribute manager 141 may be replicated (not shown) to process each of parallelized data streams 101a to 101n in parallel, or substantially in parallel.
  • collaborative dataset consolidation system 150 may include a dataset ingestion controller 130 configured to remediate (e.g., “clean” and “prepare”) parallelized data 110 prior to conversion into another data format (e.g., a graph data structure) that may be stored locally or remotely such as a graph data node referring to an external data source, such as one of data sources 190.
  • dataset ingestion controller 130 may also include a dataset analyzer 132 and a format converter 137.
  • dataset analyzer 132 may include an inference engine 134, which may include a data classifier 134a and a data enhancement manager 134b.
  • collaborative dataset consolidation system 150 is shown also to include a dataset attribute manager 141, which includes an attribute correlator 142 and a data derivation calculator 143.
  • Dataset ingestion controller 130 and dataset attribute manager 141 may be communicatively coupled to exchange dataset-related data 147a and enrichment data 147b, whereby any of dataset ingestion controller 130 and dataset attribute manager 141 may exchange data from a number of sources (e.g., external data sources) that may include dataset metadata 103a (e.g., descriptor data or information specifying dataset attributes), dataset data 103b (e.g., reference data stored locally or remotely to access data in any local or remote data storage, such as data in data sources 190), schema data 103c (e.g., sources, such as schema.org, that may provide various types and vocabularies, glossaries, data dictionaries, and the like), and ontology data 103d from any suitable ontology and any other suitable types of data sources.
  • Diagram 100 depicts an example of a classifier 124a configured to classify any portion of parallelized data streams 101a to 101n as including a unit of observation data 102 associated with one or more attributes and data values, such as attribute data 104 and attribute 105.
  • Observation data can be data that may be classified as being associated with an entity or a classification of data (e.g., a “class” of data), such as a “person” associated with attributes including a name, an address, etc., or a “product” associated with attributes including a product name, a manufacturer, a stock-keeping unit (“SKU”), etc., or a “service” associated with attributes including a name of a purveyor of such services, etc., or any other entity.
  • an “observation” or a unit of observation data may refer to a data record that may include data representing attributes identifying a name, an address, a phone number, an email address, a customer number, a familial relationship, a gender, or any other attribute or characteristic of an object or individual entity.
  • a unit of observation data 102 may be correlatable to (or matched to) any number of attributes, such as attributes 104 and 105 as well as other attributes and/or data values.
  • data representing a unit of observation data 102 may be computed to be associated with a hash value referring to a content-addressed node of a graph data arrangement, whereby similar or equivalent hash values may be implemented to consolidate (e.g., collapse, integrate, or deduplicate) multiple data representations of entities into graph data representing an entity.
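  • As a minimal illustration of content-addressed deduplication (the function name, the normalization steps, and the choice of SHA-256 below are assumptions for the sketch, not details taken from this specification), a hash computed over normalized attribute values lets equivalent observation fingerprints collapse onto the same graph node:

```python
import hashlib

def observation_node_key(attributes: dict) -> str:
    """Derive a content-addressed key from normalized attribute values.

    Equivalent observation fingerprints (same attributes after normalization)
    hash to the same key and can be collapsed onto a single graph node.
    """
    normalized = sorted((k.lower(), str(v).strip().lower()) for k, v in attributes.items())
    return hashlib.sha256(repr(normalized).encode("utf-8")).hexdigest()

# Two differently formatted records describing the same person.
record_a = {"name": "John Smith", "state": "TX "}
record_b = {"NAME": "john smith", "State": "tx"}

nodes: dict[str, list[dict]] = {}
for record in (record_a, record_b):
    nodes.setdefault(observation_node_key(record), []).append(record)

print(len(nodes))  # 1 -- both records resolve to the same content-addressed node
```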
  • observation data 102 may be referred to as an “observation fingerprint” (e.g., an electronically digital fingerprint, or a portion thereof) associated with an entity, whereby each grouping of observation data 102 and attributes 104 and 105 may be considered as a subset of data representing an entity — that when aggregated or clustered with other equivalent observation fingerprints — provides enriched graph data that may be used to describe or identify a particular entity (e.g., uniquely identifying a specific person).
  • The term “attribute” may refer to, or may be interchangeable with, the term “property.”
  • Dataset ingestion controller 130 or dataset enrichment manager 134b may be configured to generate a content graph 106 (or a portion thereof) based on one or more subsets of observation data 102 and attribute data 104 and 105.
  • content graph 106 may include at least one node representing observation data 102 and one or more other nodes each representing a data value (e.g., as attributes 104 and 105, or any other attribute or data value).
  • dataset enrichment manager 134b may be configured to correlate or predictively match groupings of content graph portions 106 to deduplicate redundant data and to consolidate attribute data to comprehensively generate enriched graph data that represents an entity.
  • dataset enrichment manager 134b may be configured to consolidate datasets and portions thereof.
  • Correlated attribute data 151 and 153 may be transmitted as enrichment data 147b to facilitate aggregating or clustering of content graph portions 106 at dataset enrichment manager 134b to form or modify at least a portion of enriched arrangement of graph data 160.
  • Dataset attribute manager 141 and attribute correlator 142 may be configured to electronically interact to aggregate or cluster content graph portions 106 to identify an individual entity (e.g., a person or a product), and may be further configured to aggregate or cluster aggregated content graph portions 106 to identify a hierarchical entity to which individual entities may be associated (e.g., a household or a manufacturer).
  • dataset analyzer 132 and any of its components, including inference engine 134 may be configured to analyze datasets of parallelized data 110 to detect or determine whether ingested data has an anomaly relating to data (e.g., improper or unexpected data formats, types or values) or to a structure of a data arrangement in which the data is disposed.
  • inference engine 134 may be configured to analyze parallelized data 110 to identify tentative anomalies and to determine (e.g., infer or predict) one or more corrective actions. In some cases, inference engine 134 may predict a most-likely solution relative to other solutions for automatic resolution to clean and prepare data.
  • dataset analyzer 132 may be configured to correct an anomaly (e.g., to correct or confirm data, such as data that might refer to a U.S. state name, such as “Texas,” rather than “TX”).
  • Dataset analyzer 132 and any of its components may be configured to perform an action based on any of a number of statistical computations, including Bayesian techniques, linear regression, natural language processing (“NLP”) techniques, machine-learning techniques, deep-learning techniques, etc.
  • dataset analyzer 132 may be configured to identify and correct or quarantine invalid data values or outlier data values (e.g., out-of-range data values).
  • dataset analyzer 132 may facilitate corrections to observation data 102 or content graph data 106 “in-line” (e.g., in real time or near real time) to enhance accuracy of atomized dataset generation (e.g., including triples) during the dataset ingestion and/or graph formation processes to form graph arrangement 160.
  • collaborative dataset consolidation system 150 may be configured to construct a repair graph including invalid or quarantined data to remediate the anomalous data for use in graph arrangement 160.
  • classifier 134a may be configured to identify and classify data as observation data 102, which may be linked to attribute data 104 and 105.
  • Data enrichment manager 134b may be configured to generate and aggregate content graph portions 106 to identify an entity.
  • Format converter 137 may be configured to convert any portion of parallelized data from data source 190 to graph-based data as observation data 102 and content graph data 106 at any time during ingestion, analyzation, identification, and deduplication of data.
  • Format converter 137 may be configured to generate other graph-based data, such as ancillary data or descriptor data (e.g., metadata) that may describe other attributes associated with each unit of observation data 102.
  • Ancillary or descriptor data can include data elements describing attributes of a unit of data, such as, for example, a label or annotation (e.g., header name) for a column, an index or column number, a data type associated with the data in a column, etc.
  • a unit of data may refer to data disposed at a particular row and column of a tabular arrangement.
  • FIG. 2 depicts an example of a dataset ingestion controller, according to some examples.
  • Diagram 200 depicts a dataset ingestion controller 230 including a data classifier 234a, which is shown to include one or more state classifiers 244, a dataset enrichment manager 234b, and a format converter 237.
  • Dataset ingestion controller 230 may be configured to receive parallelized data 210 as well as enrichment data 247b from attribute correlator 342 of FIG. 3.
  • elements depicted in diagram 200 may include structures and/or functions as similarly-named or similarly-numbered elements depicted in other drawings.
  • One or more state classifiers 244a to 244n may be configured to determine a “state” or “class” associated with portions of data received as parallelized data 210 to determine a state, type, or class of observation data.
  • One or more state classifiers 244a to 244n may be configured to implement any number of statistical analytic programs, machine-learning applications, deeplearning applications, and the like.
  • state classifier 244a may include any number of predictive data modeling algorithms 290a to 290c that may be configured to perform pattern recognition and probabilistic data computations.
  • predictive data modeling algorithms 290a to 290c may apply “k-means clustering,” or any other clustering data identification techniques to form clustered sets of data that may be analyzed to determine or learn optimal classifications of observation data and associated attributes and supplemental data (e.g., metadata) related thereto.
  • data classifier 234a and its components may be configured to detect patterns or classifications among datasets through the use of Bayesian networks, clustering analysis, as well as other known machine learning techniques or deeplearning techniques (e.g., including any known artificial intelligence techniques, or any of k-NN algorithms, linear support vector machine (“SVM”) algorithm, regression and variants thereof (e.g., linear regression, non-linear regression, etc.), Bayesian inferences and the like, including classification algorithms, such as Naive Bayes classifiers, or any other statistical, empirical, or
  • predictive data modeling algorithms 290a to 290c may include any algorithm configured to extract features and/or attributes based on classifying data or identifying patterns of data, as well as any other process to characterize subsets of data
  • predictive data model 290a may be configured to implement one of any type of neural networks (or any other predictive algorithm) as neural network model 290a, which may include a set of inputs 281 and any number of “hidden” or intermediate computational nodes 282 and 283, whereby one or more weights 287 may be implemented and adjusted (e.g., in response to training). Also shown is a set of predicted outputs 284, such as terms defining a type of observation data. Predictive data model 290a may be configured to predict a class of “observation data,” whereby one or more of any output A1, ..., Ax, Ay, ..., An may represent a class of observation data, such as a “name” (e.g., a person’s name), an “address,” a “customer number,” a “date of birth,” an “email address,” a “telephone number,” or any other class of observation data that may be associated with attributes.
  • attributes may be input into inputs 281 to derive a class or type of observation data.
  • data representing an address and a name may be applied to inputs 281 to identify an identity of an entity, such as a unique identity of a person.
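  • A minimal sketch of such a classifier is shown below, assuming scikit-learn and invented numeric features (none of the feature definitions come from this specification); it maps attribute-derived features to a predicted class of observation data:

```python
from sklearn.neural_network import MLPClassifier

# Hypothetical features derived from a raw data value:
# [token_count, digit_ratio, contains_at_sign, mean_token_length]
X = [
    [2, 0.0, 0, 4.5],   # "John Smith"
    [1, 0.2, 1, 20.0],  # "john.smith@somewhere.com"
    [1, 1.0, 0, 10.0],  # "5125550100"
    [5, 0.3, 0, 4.0],   # "123 Main Street Sometown TX"
]
y = ["name", "email_address", "telephone_number", "address"]

# A single hidden layer of intermediate computational nodes with adjustable weights.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[2, 0.0, 0, 5.0]]))  # expected to resemble ['name']
```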
  • inputs into state classifier 244b may determine affinity data that may indicate a degree of affiliation with another entity.
  • predictive data modeling algorithms 291a to 291c may be configured to predict whether an individual entity (e.g., a unique person, a unique product, etc.) is associated or affiliated with another entity.
  • inputs into predictive data modeling algorithms 291a to 291c may be configured to predict whether multiple entities, such as multiple people belong to the same household (or as a living unit) or multiple products originate from a common retailer or manufacturer.
  • Output B1 may indicate a relatively high probability of association (e.g., a familial relationship exists) and output B2 may indicate a relatively low probability of association (e.g., a familial relationship does not exist).
  • state classifier 244n may generate data representing characterizations of parallelized data 210, including metadata, to determine a “context” in which observation data and associated attributes are modeled.
  • a predicted context may facilitate enhanced accuracy in determining and resolving identities of entities.
  • Data outputs from state classifiers 244 and parallelized data 210 may be transmitted to dataset enrichment manager 234b, which may be configured to analyze ingested data relative to dataset-related data to determine correlations among dataset attributes of ingested data and other datasets of FIG. 1 (and attributes, such as dataset metadata 103a), as well as schema data 103c, ontology data 103d, and other sources of data.
  • data enrichment manager 234b may be configured to identify correlated datasets based on correlated attributes as determined, for example, by an attribute correlator and received as enrichment data 247b, which, in at least some cases, may include probabilistic or predictive data specifying, for example, classification of a data attribute or a link to other datasets to enrich a dataset.
  • the correlated attributes, as generated by an attribute correlator may facilitate the use of derived data or link-related data, as attributes, to associate, combine, join, or merge datasets to form collaborative datasets, such as enriched arrangement of graph data 260.
  • Enriched arrangement of graph data 260 may be implemented as a knowledge graph, at least in some examples.
  • dataset enrichment manager 234b is shown to include a content graph constructor 236 that is configured to form a content graph (or a portion thereof) that builds upon similar or equivalent units of observation data and subsets of attributes that may be correlatable (e.g., correlatable or matched to a threshold degree).
  • content graph constructor 236 may be configured to form a portion of graph data 260 (e.g., a sub-graph) based on correlated and deduplicated subsets of observation fingerprint data.
  • observation fingerprint data may include data representing one or more attributes such as a first name, a first initial, a last name, a residential address, an email address, a customer number, etc.
  • Enrichment data 247b may include data specifying matched or correlated attribute data with which to merge or cluster content graphs to form a comprehensive graph that includes data regarding an entity (e.g., a person, a product, a service, etc.).
  • content graph constructor 236 may be configured to cluster various units of observation data to form a cluster of data representing an identifiable entity.
  • Format converter 237 may be configured to convert data generated by data classifier 234a and dataset enrichment manager 234b into a graph-based data format. Also, format converter 237 may be configured to convert one or more of parallelized data 210, dataset-related data 247a, and enrichment data 247b into a graph-based data format compatible with enriched arrangement of graph data 260.
  • structures and/or functionalities depicted in FIG. 2 as well as other figures herein may be implemented as software, applications, executable code, application programming interfaces (“APIs”), processors, hardware, firmware, circuitry, or any combination thereof.
  • FIG. 3 depicts an example of an attribute correlator, according to some examples.
  • Diagram 300 depicts an example of an attribute correlator 342 that may include a feature extraction controller 322, which may be configured to extract feature data to identify attributes. Attribute correlator 342 further may be configured to correlate attributes among any number of datasets, including attribute data associated with any number of units of observation data. As shown, attribute correlator 342 may receive dataset-related data 247a and parallelized data 310 to identify and correlate attributes, which, in turn, may be transmitted as enrichment data 247b to dataset enrichment manager 234b of FIG. 2 to construct one or more content graphs. Referring back to FIG. 3, elements depicted in diagram 300 may include structures and/or functions as similarly-named or similarly-numbered elements depicted in other drawings.
  • Attribute correlator 342 may be configured to analyze data to detect patterns or data classifications that may resolve an issue, by “learning” or probabilistically predicting a dataset attribute through the use of Bayesian networks, clustering analysis, as well as other known machine learning techniques or deep-learning techniques, such as those described herein.
  • Feature extraction controller 322 may be configured to extract features as data representing correlated data 302 (e.g., as matched attribute data values).
  • feature extraction controller 322 may include any number of natural language processing (“NLP”) algorithms configured to correlate attribute data, such as matching or correlating names of entities (e.g., names of persons, products, services, etc.) to determine an identity of an entity.
  • Natural language processor algorithms 321a to 321c may be configured, for example, to tokenize sentences and words, perform word stemming, filter out stop or irrelevant words, or implement any other natural language processing operation to determine text-related features to correlate attribute data, such as text data.
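  • A self-contained sketch of that kind of preprocessing appears below; it uses a hand-rolled tokenizer, stop list, and suffix-stripping “stem” rather than any particular NLP library (an assumption made so the example runs without dependencies):

```python
import re

STOP_WORDS = {"the", "of", "and", "inc", "llc"}  # illustrative stop list, not from the spec

def normalize_attribute_text(text: str) -> list[str]:
    """Tokenize, lowercase, drop stop words, and apply a crude suffix 'stem'."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    tokens = [t for t in tokens if t not in STOP_WORDS]
    return [re.sub(r"(ing|ed|s)$", "", t) if len(t) > 4 else t for t in tokens]

print(normalize_attribute_text("Smith Roofing Services, Inc."))
# ['smith', 'roof', 'service'] -- now comparable against other attribute values
```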
  • feature extraction controller 322 may include any number of predictive data modeling algorithms 390a to 390c that may be configured to perform pattern recognition and probabilistic data computations.
  • predictive data modeling algorithms 390a to 390c may apply “k-means clustering,” or any other clustering data identification techniques to form clustered sets of data that may be analyzed to determine or learn optimal correlation or matching of attribute data.
  • feature extraction controller 322 may be configured to detect patterns or classifications among datasets through the use of Bayesian networks, clustering analysis, as well as other known machine learning techniques or deep-learning techniques (e.g., including any known artificial intelligence techniques, or any of k-NN algorithms, linear support vector machine (“SVM”) algorithm, regression and variants thereof (e.g., linear regression, non-linear regression, etc.), Bayesian inferences and the like, including classification algorithms, such as Naive Bayes classifiers, or any other statistical, empirical, or heuristic technique).
  • predictive data modeling algorithms 390a to 390c may include any algorithm configured to extract features and/or attributes based on identifying patterns of attribute data, as well as any other process to characterize subsets of data.
  • feature extraction controller 322 may be configured to implement any number of statistical analytic programs, machine-learning applications, deep-learning applications, and the like. Feature extraction controller 322 is shown to have access to any number of predictive models, such as predictive models 390a, 390b, and 390c, among others. As shown, predictive data model 390a may be configured to implement one of any type of neural networks to identify similar or equivalent data representations of attributes. For example, predictive models 390a, 390b, and 390c may be configured to identify or match names associated with observation data, as well as matching addresses (or any other attribute) associated with a name to identify an individual entity.
  • a neural network model 390a may include a set of inputs 391 and any number of “hidden” or intermediate computational nodes 392, whereby one or more weights 397 may be implemented and adjusted (e.g., in response to training). Also shown is a set of predicted outputs 393, such as text terms defining a match among attribute values (e.g., matched names, matched addresses, matched SKUs, etc.), among any other types of outputs.
  • Feature extraction controller 322 may include a neural network data model configured to predict (e.g., extract) contextual or related text terms based on generation of vectors (e.g., word vectors) with which to determine degrees of similarity (e.g., magnitudes of cosine similarity) to, for example, establish compatibility between attribute data (to indicate a degree of equivalency), at least in some examples.
  • feature extraction controller 322 may be configured to implement a “word2vec” natural language processing algorithm or any other natural language process that may or may not transform, for example, text data into numerical data (e.g., data representing a vector space).
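  • The cosine-similarity idea can be sketched compactly as below, using character-trigram count vectors instead of a trained word2vec model (an assumption made to keep the example self-contained):

```python
import math
from collections import Counter

def trigram_vector(text: str) -> Counter:
    """Build a sparse count vector of character trigrams (with padding)."""
    padded = f"  {text.lower()}  "
    return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

v1 = trigram_vector("123 Main Street")
v2 = trigram_vector("123 Main St.")
print(round(cosine_similarity(v1, v2), 2))  # relatively high -> candidate address match
```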
  • feature extraction controller 322 may include algorithms configured to detect a degree of similarity between, for example, strings of texts to match names, addresses, etc.
  • feature extraction controller 322 may be configured to implement edit distance algorithms and/or phonetic encoding algorithms, among others, to identify matched attribute values.
  • feature extraction controller 322 may implement an algorithm configured to determine the Levenshtein Distance to calculate a difference between data strings of alphanumeric text (e.g., to determine similarity or equivalency).
  • a phonetic algorithm such as a Soundex algorithm, may be implemented to detect a degree to which text strings or alphanumeric text strings may be similar or equivalent.
  • a Jaro-Winkler distance algorithm may be implemented to detect a degree of equivalency between or among text strings.
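  • A small sketch of edit-distance matching is given below; it implements the standard Levenshtein recurrence directly, and the match threshold is an illustrative assumption:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def names_match(a: str, b: str, max_edits: int = 2) -> bool:
    return levenshtein(a.lower(), b.lower()) <= max_edits

print(levenshtein("Jon Smith", "John Smith"))  # 1
print(names_match("Katherine", "Catherine"))   # True (distance 1)
```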
  • feature extraction controller 322 may be configured to identify correlated data 302 and generate extracted feature data 303, which may include one or more groups of data units 371 to 374, whereby each group of data units 371 to 374 may be associated with a unit of observation data or a unit of attribute data, or both.
  • feature extraction controller 322 may be configured to identify correlatable data units 371 to 374 as attribute data that may match with other attribute data values.
  • data unit 371 may specify a “person” as a class of data indicative of a unit of predicted observation data 355a.
  • data units 372 to 374 may describe attribute values “name,” “address,” and “phone number” as entity attributes 355b.
  • attribute correlator 342 may be configured to generate electronic messages including correlated data 302 and extracted feature data 303 that may be transmitted as enrichment data 247b to a content graph constructor, such as depicted in FIG. 2.
  • a content graph constructor may be configured to aggregate or cluster observation data 355a with other instances of matched or correlatable units of observation data.
  • structures and/or functionalities depicted in FIG. 3 as well as other figures herein may be implemented as software, applications, executable code, application programming interfaces (“APIs”), processors, hardware, firmware, circuitry, or any combination thereof.
  • FIG. 4 illustrates an exemplary layered architecture for implementing a collaborative dataset consolidation system application, according to some examples.
  • Diagram 400 depicts application stack (“stack”) 401, which is neither a comprehensive nor a fully inclusive layered architecture to correlate subsets of parallelized data from disparately-formatted data sources to identify entity data and to aggregate graph data portions, among other things.
  • One or more elements depicted in diagram 400 of FIG. 4 may include structures and/or functions as similarly-named or similarly-numbered elements depicted in other drawings, or as otherwise described herein, in accordance with one or more examples, such as described relative to any figure or description herein.
  • Application stack 401 may include a collaborative dataset consolidation system application layer 450 upon application layer 440, which, in turn, may be disposed upon any number of lower layers (e.g., layers 403a to 403d).
  • Collaborative dataset consolidation system application layer 450 may be configured to correlate subsets of parallelized data from disparately-formatted data sources to identify entity data and to aggregate graph data portions, as described herein.
  • collaborative dataset consolidation system application layer 450 and application layer 440 may be disposed on data exchange layer 403d, which may be implemented using any programming language, such as HTML, JSON, XML, etc., or any other format to effect generation and communication of requests and responses among computing devices and computational resources constituting an enterprise, an entity, and/or a platform configured to correlate data and information expeditiously, such as information regarding products or services aligned with data in targeted data sources compatible with data integration.
  • Data exchange layer 403d may be disposed on a service layer 403c, which may provide a transfer protocol or architecture for exchanging data among networked applications.
  • service layer 403c may provide for a RESTful-compliant architecture and attendant web services to facilitate GET, PUT, POST, DELETE, and other methods or operations.
  • service layer 403c may provide, as an example, SOAP web services based on remote procedure calls (“RPCs”), or any other like services or protocols (e.g., APIs, such as REST APIs, etc.).
  • Service layer 403c may be disposed on a transport layer 403b, which may include protocols to provide host-to-host communications for applications via an HTTP or HTTPS protocol, in at least this example.
  • Transport layer 403b may be disposed on a network layer 403a, which, in at least this example, may include TCP/IP protocols and the like.
  • collaborative dataset consolidation system application layer 450 may include (or may be layered upon) an application layer 440 that includes logic constituting a data catalog application layer 441, which is optional and referenced in FIG. 13, a knowledge graph application layer 442, a dataset ingestion controller layer 424, and a dataset attribute manager layer 426.
  • layers 424, 426, 441, 442, and 450 may include logic to implement the various functionalities described herein.
  • any of the described layers of FIG. 4 or any other processes described herein in relation to other figures may be implemented as software, hardware, firmware, circuitry, or a combination thereof.
  • the described techniques may be implemented using various types of programming, development, scripting, or formatting languages, frameworks, syntax, applications, protocols, objects, or techniques, including, but not limited to, PythonTM, ASP, ASP.net, .Net framework, Ruby, Ruby on Rails, C, Objective C, C++, C#, Adobe® Integrated RuntimeTM (Adobe® AIRTM), ActionScriptTM, FlexTM, LingoTM, JavaTM, JSON, JavascriptTM, Ajax, Perl, COBOL, Fortran, ADA, XML, MXML, HTML, DHTML, XHTML, HTTP, XMPP, PHP, and others, including SQLTM, SPARQLTM, TurtleTM, etc., as well as any proprietary application and software provided or developed by data.world, Inc., of Austin, Texas, U.S.A.
  • FIG. 5 is a flow diagram as an example of analyzing parallelized data formatted as graphbased data with correlated data attributes to consolidate observation data to form content graph portions, according to some embodiments.
  • Flow 500 is an example of aggregating or consolidating observation data to identify individual entities and supersets of individual entities in accordance with various examples described herein.
  • parallelized data may be received from multiple disparate data sources in a computing system that includes multiple processors configured to facilitate massively parallel processing to extract attribute data to analyze and correlate data values associated with various units of attribute data in parallel.
  • raw data that may be ingested as parallelized data may be modified to remediate (e.g., “clean” and “prepare”) ingested data to be formatted or referenced as graph-based data in an arrangement of graph data, such as a knowledge graph. Remediation of data may be performed in accordance with rules or a set of predictively compliant data thresholds to identify valid data.
  • data representing a state of Texas may be modified or normalized to reflect an alternative representation of TX.
  • data issues can be detected and corrected based on a lexical structure. Examples may include trimming quotes and leading/trailing whitespace(s), and correcting field misuse errors, such as modifying data fields first name and last name to correct for an invalid last name. For example, data fields {firstName: “John Smith”, lastName: “OCCUPANT”} may be modified to reflect data fields {firstName: “John”, lastName: “Smith”}, thereby removing “occupant” as an erroneous last name of an entity.
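  • A minimal sketch of this style of rule-based remediation follows; the field names and the two rules shown are illustrative assumptions rather than the specification's actual rule set:

```python
STATE_NORMALIZATION = {"texas": "TX", "california": "CA"}  # illustrative subset

def remediate(record: dict) -> dict:
    """Trim quotes/whitespace, normalize state names, and repair field misuse."""
    r = {k: v.strip().strip("'\"") for k, v in record.items()}
    if r.get("state", "").lower() in STATE_NORMALIZATION:
        r["state"] = STATE_NORMALIZATION[r["state"].lower()]
    # Field-misuse repair: a full name in firstName with a placeholder lastName.
    if r.get("lastName", "").upper() == "OCCUPANT" and " " in r.get("firstName", ""):
        r["firstName"], _, r["lastName"] = r["firstName"].partition(" ")
    return r

print(remediate({"firstName": "John Smith", "lastName": "'OCCUPANT", "state": "Texas "}))
# {'firstName': 'John', 'lastName': 'Smith', 'state': 'TX'}
```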
  • data representing a subset of parallelized data may be classified to identify observation data, such as a class or type of observation data.
  • data that may not comply with rules e.g., based on conformance to an ontology, semantic-defined data, a data dictionary, a glossary, etc.
  • rules e.g., based on conformance to an ontology, semantic-defined data, a data dictionary, a glossary, etc.
  • noncompliant data or outlier data may be quarantined for further processing to, for example, generate a repair graph with which to integrate subsequently with a data arrangement on an enriched graph.
  • data records of individual entities may be quarantined if data includes (1) an absence of a first name and a last name, (2) addresses that do not conform to postal standards (e.g., invalid zip codes and state identifiers), (3) email addresses that do not conform to an “id@domain.tld” format, (4) phone numbers that do not conform to a country’s standards (e.g., a U.S. phone number that does not include 10 digits), (5) blacklisted data references, and (6) other non-compliant data. Further at 504, overly-matched or overly-correlated data may be deemed as outliers that may complicate resolution of an identity of an entity, and thus may be quarantined or discarded.
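  • A short sketch of such conformance checks appears below; the regular expressions and the reason strings are assumptions chosen for illustration:

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}$")   # id@domain.tld
US_ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def quarantine_reasons(record: dict) -> list[str]:
    """Return the list of rule violations; an empty list means the record passes."""
    reasons = []
    if not record.get("firstName") and not record.get("lastName"):
        reasons.append("missing name")
    if record.get("zip") and not US_ZIP_RE.match(record["zip"]):
        reasons.append("non-conforming postal code")
    if record.get("email") and not EMAIL_RE.match(record["email"]):
        reasons.append("non-conforming email")
    phone_digits = re.sub(r"\D", "", record.get("phone", ""))
    if record.get("phone") and len(phone_digits) != 10:
        reasons.append("non-conforming US phone number")
    return reasons

print(quarantine_reasons({"firstName": "", "lastName": "",
                          "email": "not-an-email", "phone": "555-0100"}))
# ['missing name', 'non-conforming email', 'non-conforming US phone number']
```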
  • residential addresses, email addresses, phone numbers, and customer numbers that are linked to a large number of observation fingerprints, such as 1,000 instances, may refer to an organization or a group of individual entities.
  • data may be of little or negligible value to determine an identity of a specific entity, such as an individual person, and may be quarantined or discarded (e.g., into a repair graph to receive subsequent processing to determine whether the associated data may be included to enrich an arrangement of graph data).
  • one or more content graphs in a graph data format may be constructed based on, for example, a class of observation data and one or more entity attributes, such as a name, an address, and other attribute data.
  • a unit of observation data may be referred to as “observation fingerprint,” which may be an electronically digital fingerprint, or a portion thereof, associated with an entity.
  • data representing one or more entity attributes associated with observation data may be identified based on any number of sources.
  • a predictive data classifier or an attribute correlator may be configured to identify a set of terms (e.g., in a data dictionary) with which to search parallelized data or converted graph-based data.
  • Identified attribute data may be linked or otherwise associated with a unit of observation data.
  • a subset of parallelized data may be correlated to other subsets of the parallelized data associated with a class or unit of observation data to form correlated subsets of parallelized data.
  • each data value associated with a corresponding attribute may be used to correlate, match, or otherwise detect equivalent units of attribute data associated with other units of observation data (e.g., other digital observation fingerprints).
  • Data values of attribute data may be matched against other data values of other attribute data by an attribute correlator configured to identify and correlate patterns of data.
  • correlation of subsets of parallelized data or associated attribute data values may include forming adjacency nodes linked to units of attributes in a content graph (e.g., a sub-graph).
  • Adjacency nodes may be linked together to cluster or aggregate similar or equivalent attribute data values that may constitute or relate to an individual entity.
  • An adjacency node may be a portion of a constructed content graph that connects or links an attribute (e.g., a name) and other indicator values (e.g., other attribute values) to capture data relationships comprehensively.
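  • The following sketch illustrates one way an adjacency node could tie a name attribute to other indicator values in a small in-memory graph (the node-naming scheme is an assumption, not the specification's format):

```python
from collections import defaultdict

# Graph stored as an adjacency list: node -> set of linked nodes.
graph = defaultdict(set)

def link(a: str, b: str) -> None:
    graph[a].add(b)
    graph[b].add(a)

def add_adjacency_node(observation_id: str, name: str, indicators: list[str]) -> str:
    """Create an adjacency node tying a name attribute to other indicator values."""
    adj = f"adjacency:{observation_id}"
    link(adj, f"name:{name}")
    for value in indicators:
        link(adj, value)
    link(observation_id, adj)
    return adj

a1 = add_adjacency_node("obs:1", "John Smith",
                        ["addr:123 Main St", "email:john.smith@somewhere.com"])
a2 = add_adjacency_node("obs:2", "John Smith", ["addr:123 Main Street"])
link(a1, a2)  # matched attribute values allow the two adjacency nodes to be linked
print(sorted(graph[a1]))
```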
  • one or more units of observation data (and correlated attribute data) may be classified as an individual entity.
  • multiple units of observation data may be clustered together to form an enriched content graph that describes an individual entity, such as a person, a product, a service, or any other entity.
  • correlated subsets of parallelized data may be clustered to identify an individual entity using data representing multiple adjacency node data linked together.
  • data representing multiple individual entities may be aggregated or clustered to form a set of entities based on correlated subsets of parallelized data.
  • An individual entity may be aggregated or clustered with data representing other individual entities to form a group of clustered individual entities.
  • multiple individual entities may represent multiple persons having a familial relationship or a common geographic location (e.g., a household, a living unit, or a common residential address at which the entities reside).
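  • One way to realize this aggregation is a union-find pass over pairwise matches, sketched below under the assumption that match decisions (e.g., shared address and surname) have already been made upstream:

```python
class UnionFind:
    """Minimal disjoint-set structure for grouping matched observation fingerprints."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Pairs of observation fingerprints judged to match upstream.
matches = [("obs:1", "obs:2"), ("obs:2", "obs:3"), ("obs:4", "obs:5")]

uf = UnionFind()
for a, b in matches:
    uf.union(a, b)

clusters = {}
for obs in {o for pair in matches for o in pair}:
    clusters.setdefault(uf.find(obs), []).append(obs)

print(list(clusters.values()))  # e.g., [['obs:1', 'obs:2', 'obs:3'], ['obs:4', 'obs:5']]
```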
  • flow 500 may be configured to modify a graph data arrangement to enrich data stored in association with, for example, a knowledge graph.
  • FIG. 6 is a flow diagram as an example of forming an ancillary graph to remediate data for integration into an enriched arrangement of graph data, according to some embodiments.
  • Flow 600 may be initiated at 602, at which an ancillary graph may be constructed, whereby the ancillary graph may be referred to as a repair graph.
  • noncompliant data or outlier data as described herein, may be quarantined in a repair graph for subsequent analysis to determine whether such data may be validated for inclusion into an enriched arrangement of graph data (e.g., a knowledge graph).
  • data representing an ancillary graph or repair graph may be analyzed automatically to generate data representing an electronic report describing and characterizing noncompliant and outlier data with suggestions to resolve and validate such data.
  • remedial actions may be generated automatically (e.g., at a collaborative dataset consolidation system) to transmute noncompliant and outlier data into valid data, which may be included in a knowledge graph.
  • FIG. 8 depicts an example of a portion of a content graph, according to some examples.
  • Diagram 800 depicts attribute data 804 and 806 being associated with a unit of observation data 802.
  • Attribute data 804 may represent name data, and may have attribute data values of 810 (“John”) and 812 (“Smith”).
  • Attribute data 806 may represent address data, and may include attribute values of 814 (“123 Main Street”), 816 (city of “Sometown”), and 818 (state of Texas, or “TX”).
  • a unit of observation data 802 may also be linked to other attribute data, such as attribute data value 822 (e.g., an email address of “john.smith@somewhere.com”) and attribute data value 824 (e.g., a customer number or identifier “ABCD1234-5678DEF0”).
  • a dataset analyzer and/or an attribute correlator may be configured to correlate or match address nodes that may be used to resolve an identity of an entity, whereby some node data may be used to infer whether a subset of addresses may be semantically equivalent or similar.
  • attribute data values 804, 806, 822, and 824 may be used to match or correlate with other equivalent attribute data values to determine whether unit of observation data 802 may represent an identity of an entity as does other units of observation data.
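  • To make the structure of FIG. 8 concrete, the sketch below encodes the same observation and attribute values as subject-predicate-object triples; the predicate names are assumptions, chosen only to mirror the figure:

```python
observation = "observation:802"
triples = [
    (observation, "hasName", "name:804"),
    ("name:804", "firstName", "John"),
    ("name:804", "lastName", "Smith"),
    (observation, "hasAddress", "address:806"),
    ("address:806", "street", "123 Main Street"),
    ("address:806", "city", "Sometown"),
    ("address:806", "state", "TX"),
    (observation, "email", "john.smith@somewhere.com"),
    (observation, "customerNumber", "ABCD1234-5678DEF0"),
]

# Attribute values reachable from the observation node (one or two hops),
# i.e., the values that can be matched against other units of observation data.
direct = {obj for subj, _, obj in triples if subj == observation}
reachable = sorted(direct | {obj for subj, _, obj in triples if subj in direct})
print(reachable)
```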
  • FIGs. 9A and 9B depict examples of correlating attribute data values to construct a content graph portion, at least in some examples.
  • Diagram 900 of FIG. 9A depicts a first unit of attribute data 902 (“attribute X”) as being linked to attribute data values 814, 816, and 818, whereas a second unit of attribute data 904 (“attribute Y”) may include attribute data values 814, 818, and 910 (zip code of “78007”).
  • a dataset analyzer and/or an attribute correlator may be configured to correlate attribute data 902 and 904 to form a data relationship or link 930 indicating attribute data 902 and 904 refer to an equivalent geographic location.
  • FIG. 9B depicts a first unit of attribute data 902 associated with attribute data values 814, 816, and 818.
  • diagram 950 depicts a second unit of attribute data 944 (“attribute N”) that may include attribute data values 954, 956, and 968.
  • a dataset analyzer and/or an attribute correlator may be configured to determine that an individual entity (e.g., a person) or a group of individual entities (e.g., a family) have been associated with attribute data 902 during a first period of time, but may be associated with attribute data 944 during a second period of time after moving from one geographic location to another geographic location.
  • Diagram 1000 depicts a first unit of observation data 1002, as an observation digital fingerprint, that may include entity attribute data 1012 (e.g., entity A1 may be data representing a name) having attribute data values 1001, as well as entity attribute data 1014 (e.g., entity B1 may be data representing an address).
  • A second unit of observation data 1004 may include entity attribute data 1022 (e.g., entity A2 may be data representing a name) and entity attribute data 1024 (e.g., entity B2 may be data representing an address), which may be associated with attribute data values (shown as bracketed ellipses).
  • a dataset analyzer and/or an attribute correlator may be configured to correlate or match entity attribute data 1014 and 1024 via link 1050, and may be further configured to establish data representing adjacency nodes 1016 and 1026 that may be associated together via link 1040. As shown, adjacency node 1016 may be linked to entity data 1012 and 1014, and adjacency node 1026 may be linked to entity data 1022 and 1024.
  • Diagram 1050 depicts aggregation or clustering of observation data 1002 and 1004.
  • a dataset analyzer and/or an attribute correlator may be configured to form an association 1080 to specify that digital fingerprints of units of observation data 1002 and 1004 may resolve to represent an individual entity, such as a uniquely-identifiable person.
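  • The adjacency and association forming of FIG. 10 can be sketched as follows; the exact-match rule and the attribute values shown are illustrative assumptions, and fuzzier comparisons could be substituted.

```python
# Minimal sketch of FIG. 10: matching entity attributes across two
# observation "digital fingerprints" and recording an association that
# both may resolve to the same individual.

fingerprint_1002 = {"name": "john smith", "address": "123 main street, sometown, tx"}
fingerprint_1004 = {"name": "j. smith",   "address": "123 main street, sometown, tx"}

def matching_attributes(fp_a, fp_b):
    # Return the attribute kinds whose values agree exactly (address here).
    return [k for k in fp_a.keys() & fp_b.keys() if fp_a[k] == fp_b[k]]

links = []
matches = matching_attributes(fingerprint_1002, fingerprint_1004)
for kind in matches:
    links.append((f"1002/{kind}", "matches", f"1004/{kind}"))   # e.g. link 1050

if matches:  # adjacency established; fingerprints resolve to one entity
    links.append(("observation:1002", "same_entity_as", "observation:1004"))  # association 1080
print(links)
```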
  • FIGs. 11A and 11B depict examples of clustering units of observation data of content graph portions to identify an individual entity, at least in some examples.
  • Diagram 1100 of FIG. 11A depicts several units of observation data 1102, 1104, and 1106 associated with entity attribute data values 1112, 1114, and 1116, respectively.
  • a dataset analyzer and/or an attribute correlator may be configured to form associations among digital fingerprints representing observation data 1102, 1104, and 1106 to aggregate or cluster the associated data together to form links 1110 to establish data representing an individual entity 1101.
  • observation data 1102, 1104, and 1106 may be clustered to resolve an identity of an individual, such as a specific person, specific product, specific service, and the like.
  • Diagram 1150 of FIG. 11B depicts identification of attribute data values via links 1157, 1158, and 1159 associated with individual entity 1101 that may more comprehensively describe individual entity 1101 than a single unit of observation data.
  • individual entity 1101 may include attribute data values 1162, 1164a, and 1166a based on aggregation or clustering of units of observation data 1102, 1104, and 1106.
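  • A minimal Python sketch of the clustering of FIGs. 11A and 11B follows, assuming "same entity" associations have already been formed; the union-find helper and the example attribute values are illustrative assumptions.

```python
# Minimal sketch of FIGs. 11A/11B: observations linked by "same entity"
# associations are clustered, and the cluster's attribute values are
# merged into one fuller profile for the individual entity (1101).

observations = {
    "1102": {"email": "john.smith@somewhere.com"},
    "1104": {"address": "123 Main Street, Sometown, TX"},
    "1106": {"name": "John Smith"},
}
same_entity_links = [("1102", "1104"), ("1104", "1106")]   # links 1110

def cluster(links):
    # Tiny union-find over observation identifiers.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in links:
        parent[find(a)] = find(b)
    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

for members in cluster(same_entity_links):
    profile = {}
    for obs_id in sorted(members):
        profile.update(observations[obs_id])   # merged attribute values
    print(sorted(members), profile)
```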
  • FIG. 12 depicts an example of data representing aggregating individual entities to form aggregated data, according to some examples.
  • Diagram 1200 illustrates clustering or aggregation of individual entities 1202, 1204, and 1206 to represent a hierarchical data relationship as aggregated data 1201.
  • individual entities 1202, 1204, and 1206 may represent three unique individuals who live together (e.g., as a family).
  • aggregated data 1201 may represent the group of individual entities as, for example, a family.
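  • For illustration, a minimal Python sketch of aggregating individual entities into a group (e.g., a family or household) by a shared address; the example names and the choice of address as the grouping key are assumptions, not the claimed aggregation logic.

```python
# Minimal sketch of FIG. 12: individual entities that share a household
# attribute (a common address here) are aggregated under a parent node
# representing the group, forming a hierarchical data relationship.

from collections import defaultdict

individuals = {
    "entity:1202": {"name": "John Smith",  "address": "123 Main Street, Sometown, TX"},
    "entity:1204": {"name": "Jane Smith",  "address": "123 Main Street, Sometown, TX"},
    "entity:1206": {"name": "Jimmy Smith", "address": "123 Main Street, Sometown, TX"},
}

households = defaultdict(list)
for entity_id, attrs in individuals.items():
    households[attrs["address"]].append(entity_id)

aggregated = {
    f"aggregate:{i}": {"kind": "family", "members": members}
    for i, members in enumerate(households.values(), start=1201)
}
print(aggregated)   # e.g. {"aggregate:1201": {"kind": "family", "members": [...]}}
```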
  • FIG. 13 depicts an example of a data catalog and a knowledge graph implemented as a cloud-based service, according to some examples.
  • Diagram 1300 depicts a computing system accessible via a network 1360 (e.g., the Internet) and application programming interfaces (“APIs”) 1322 (or other data connectors).
  • a computing system may include hardware (e.g., processors) and software to implement a collaborative dataset consolidation system 1350.
  • a user 1394 at a remote computing device 1394b may access a collaborative dataset consolidation system 1350 as a cloud-based service.
  • Collaborative dataset consolidation system 1350 of FIG. 13 may include a dataset ingestion controller 1330 and a dataset attribute manager 1341 to exchange dataset-related data 1347a and enrichment data 1347b.
  • Diagram 1300 also depicts collaborative dataset consolidation system 1350 including or being configured to access data associated with data catalog controller logic 1352 to access, manage, and use a data catalog (e.g., an enterprise data catalog).
  • Collaborative dataset consolidation system 1350 may include or may be configured to access data associated with knowledge graph controller logic 1356 to access, manage, and use a knowledge graph data arrangement 1342.
  • knowledge graph data arrangement 1342 may be implemented as a knowledge graph-as-a-service (e.g., “KGaaS”).
  • knowledge graph data arrangement 1342 may interact electronically with data catalog controller logic 1352 to form a network of concepts and semantic relationships describing data and metadata associated with the knowledge graph.
  • knowledge graph data arrangement 1342 may be configured to integrate knowledge, information, and data at a relatively large scale as a graph data model, whereby knowledge graph data arrangement 1342 may include nodes representing tables, columns, dashboards, reports, business terms, users, etc.
  • Collaborative dataset consolidation system 1350 may access data locally or remotely at any data source, such as data sources 1302 that may be accessible via APIs 1320. Such data may include dataset metadata 1303a (e.g., descriptor data or information specifying dataset attributes), dataset data 1303b (e.g., reference data stored locally or remotely to access data in any local or remote data storage, such as data in data sources 1302), schema data 1303c (e.g., sources, such as schema.org, that may provide various types and vocabularies, glossaries, data dictionaries, and the like), and ontology data 1303d from any suitable ontology, as well as any other suitable types of data source.
  • Elements depicted in diagram 1300 may include structures and/or functions as similarly-named or similarly-numbered elements depicted in other drawings.
  • Collaborative dataset consolidation system 1350 may include a data project controller 1331 that may be configured to provision and control a data project interface (not shown) as a computerized tool, or as controls for implementing computerized tools, to procure, generate, manipulate, and share datasets, as well as to share query results and insights (e.g., conclusions or subsidiary conclusions) among any number of collaborative computing systems (and collaborative users of system 1350).
  • data project controller 1331 may be configured to provide computerized tools (or access thereto) to establish a data project, as well as invite collaboration and provide real-time (or near real-time) information as to insights to data analysis (e.g., conclusions) relating to a dataset or data project, as well as a data dictionary or glossary that may constitute at least a portion of data catalog 1355.
  • Data project controller 1331 may be configured to identify a potential resolution, aim, goal, or hypothesis through, for example, application of one or more queries against a dataset (e.g., a canonical dataset).
  • external computerized analysis tools include external statistical and visualization applications, such as Tableau®, that may be accessible as external data and visualization logic 1380.
  • data project controller 1331 may be configured to access, manage, build, and use data representing a data dictionary 1353 (e.g., a composite data dictionary), which may be managed electronically by data catalog controller logic 1352.
  • data representing data dictionary 1353 may be a subset of data representing a data catalog 1355.
  • In cases in which data catalog 1355 may be disposed in a cloud-based computing system, data catalog 1355 may be referred to as a “data catalog-as-a-service,” at least in some examples.
  • dataset query engine 1339 may be configured to access data associated with knowledge graph data arrangement 1342 as atomized datasets that may be formed as triples compliant with an RDF specification. Further, knowledge graph data arrangement 1342 may be stored in one or more repositories, at least one of which may be a database storage device formed as a “triplestore.” Note that in some cases, data referenced to knowledge graph data arrangement 1342 may also be of any data format, such as CSV, JSON, XML, XLS, MySQL, binary, RDF, or other similar or suitable data formats.
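  • For illustration, a minimal Python sketch of storing atomized facts as RDF triples and querying them with SPARQL, using the open-source rdflib package; the namespace, node identifiers, and predicate names are hypothetical and are not the system's schema or query engine.

```python
# Minimal sketch of storing atomized dataset facts as RDF triples and
# querying them, in the spirit of a triplestore-backed knowledge graph.
# Requires the rdflib package; all identifiers below are illustrative.

from rdflib import Graph, Literal, Namespace

EX = Namespace("https://example.org/kg/")
g = Graph()

# Nodes for a catalog asset (a table and one of its columns) and an entity.
g.add((EX.table_customers, EX.hasColumn, EX.column_email))
g.add((EX.observation_802, EX.email, Literal("john.smith@somewhere.com")))
g.add((EX.observation_802, EX.name, Literal("John Smith")))

# A SPARQL query against the graph, analogous to a dataset query engine
# resolving which observations carry an email attribute.
results = g.query("""
    PREFIX ex: <https://example.org/kg/>
    SELECT ?obs ?email WHERE { ?obs ex:email ?email . }
""")
for row in results:
    print(row.obs, row.email)

# The graph can be serialized to standard RDF formats for storage.
print(g.serialize(format="turtle"))
```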
  • FIG. 14 depicts an example of a system architecture configured to correlate subsets of parallelized data from disparately-formatted data sources to identify entity data based on aggregated or clustered content graph data portions, according to an example.
  • Data constituting executable instructions (e.g., remote applications) and other content, such as text, video, audio, etc. may be stored in (or exchanged with) various communication channels or storage devices.
  • various units of message data or content may be stored using one or more of a web application 1424 (e.g., a public data source, such as a news aggregation web site), an email application service 1426, an electronic messaging application 1428 (e.g., a texting or messenger application), social networking services 1430 and a services platform and repository 1432 (e.g., cloud computing services provided by Google® cloud platform, an AWS® directory service provided by Amazon Web Services, Inc., or any other platform service).
  • a server 1415 may implement a collaborative dataset consolidation system application 1450 to perform various functionalities as described herein.
  • server 1415 may be a web server providing the applications 1450 via networks 1410.
  • a client computing device may be implemented and/or embodied in a computer device 1405, a mobile computing device 1406 (e.g., a smart phone), a wearable computing device 1407, or other computing device. Any of these client computing devices 1405 to 1407 may be configured to transmit electronic messages and content (e.g., as electronic text or documents, video content, audio content, or the like) from data store 1416, and may be configured to receive content (e.g., other electronic content), whereby collaborative dataset consolidation system application 1450 may be configured to correlate subsets of parallelized data from disparately-formatted data sources to identify entity data based on aggregated or clustered content graph data portions.
  • computing platform 1500, or any portion thereof, can be disposed in any device, such as a computing device 1590a, mobile computing device 1590b, and/or a processing circuit in association with initiating any of the functionalities described herein, via user interfaces and user interface elements, according to various examples.
  • Processor 1504 can be implemented as one or more graphics processing units (“GPUs”), as one or more central processing units (“CPUs”), such as those manufactured by Intel® Corporation, or as one or more virtual processors, as well as any combination of CPUs and virtual processors.
  • Computing platform 1500 exchanges data representing inputs and outputs via input-and-output devices 1501, including, but not limited to, keyboards, mice, audio inputs (e.g., speech-to-text driven devices), user interfaces, displays, monitors, cursors, touch-sensitive displays, touch-sensitive input and outputs (e.g., touch pads), LCD or LED displays, and other I/O-related devices.
  • input-and-output devices 1501 may be implemented as, or otherwise substituted with, a user interface in a computing device associated with, for example, a user account identifier in accordance with the various examples described herein.
  • computing platform 1500 performs specific operations by processor 1504 executing one or more sequences of one or more instructions stored in system memory 1506, and computing platform 1500 can be implemented in a client-server arrangement, peer-to-peer arrangement, or as any mobile computing device, including smart phones and the like. Such instructions or data may be read into system memory 1506 from another computer readable medium, such as storage device 1508. In some examples, hard-wired circuitry may be used in place of or in combination with software instructions for implementation. Instructions may be embedded in software or firmware.
  • computer readable medium refers to any tangible medium that participates in providing instructions to processor 1504 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media.
  • Non-volatile media includes, for example, optical or magnetic disks and the like.
  • Volatile media includes dynamic memory, such as system memory 1506.
  • Common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can access data. Instructions may further be transmitted or received using a transmission medium.
  • transmission medium may include any tangible or intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such instructions.
  • Transmission media includes coaxial cables, copper wire, and fiber optics, including wires that comprise bus 1502 for transmitting a computer data signal.
  • system memory 1506 can include various modules that include executable instructions to implement functionalities described herein.
  • System memory 1506 may include an operating system (“O/S”) 1532, as well as an application 1536 and/or logic module(s) 1559.
  • system memory 1506 may include any number of modules 1559, any of which, or one or more portions of which, can be configured to facilitate any one or more components of a computing system (e.g., a client computing system, a server computing system, etc.) by implementing one or more functions described herein.
  • any of the above-described features can be implemented in software, hardware, firmware, circuitry, or a combination thereof.
  • the structures and constituent elements above, as well as their functionality may be aggregated with one or more other structures or elements.
  • the elements and their functionality may be subdivided into constituent sub-elements, if any.
  • At least some of the above-described techniques may be implemented using various types of programming or formatting languages, frameworks, syntax, applications, protocols, objects, or techniques. These can be varied and are not limited to the examples or descriptions provided.
  • modules 1559 of FIG. 15, or one or more of their components, or any process or device described herein can be in communication (e.g., wired or wirelessly) with a mobile device, such as a mobile phone or computing device, or can be disposed therein.
  • a mobile device in communication with one or more modules 1559 or one or more of its/their components (or any process or device described herein), can provide at least some of the structures and/or functions of any of the features described herein.
  • the structures and/or functions of any of the above-described features can be implemented in software, hardware, firmware, circuitry, or any combination thereof. Note that the structures and constituent elements above, as well as their functionality, may be aggregated or combined with one or more other structures or elements. Alternatively, the elements and their functionality may be subdivided into constituent sub-elements, if any.
  • At least some of the abovedescribed techniques may be implemented using various types of programming or formatting languages, frameworks, syntax, applications, protocols, objects, or techniques.
  • at least one of the elements depicted in any of the figures can represent one or more algorithms.
  • at least one of the elements can represent a portion of logic including a portion of hardware configured to provide constituent structures and/or functionalities.
  • modules 1559 or one or more of its/their components, or any process or device described herein can be implemented in one or more computing devices (i.e., any mobile computing device, such as a wearable device, such as a hat or headband, or mobile phone, whether worn or carried) that include one or more processors configured to execute one or more algorithms in memory.
  • at least some of the elements in the above-described figures can represent one or more algorithms.
  • at least one of the elements can represent a portion of logic including a portion of hardware configured to provide constituent structures and/or functionalities. These can be varied and are not limited to the examples or descriptions provided.
  • modules 1559 or one or more of its/their components, or any process or device described herein can be implemented in one or more computing devices that include one or more circuits.
  • at least one of the elements in the above-described figures can represent one or more components of hardware.
  • at least one of the elements can represent a portion of logic including a portion of a circuit configured to provide constituent structures and/or functionalities.
  • the term “circuit” can refer, for example, to any system including a number of components through which current flows to perform one or more functions, the components including discrete and complex components.
  • discrete components include transistors, resistors, capacitors, inductors, diodes, and the like
  • complex components include memory, processors, analog circuits, and digital circuits, including field-programmable gate arrays (“FPGAs”) and application-specific integrated circuits (“ASICs”). Therefore, a circuit can include a system of electronic components and logic components (e.g., logic configured to execute instructions such that a group of executable instructions of an algorithm, for example, is thus a component of a circuit).
  • the term “module” can refer, for example, to an algorithm or a portion thereof, and/or logic implemented in either hardware circuitry or software, or a combination thereof (i.e., a module can be implemented as a circuit).
  • algorithms and/or the memory in which the algorithms are stored are “components” of a circuit.
  • circuit can also refer, for example, to a system of components, including algorithms. These can be varied and are not limited to the examples or descriptions provided.

Abstract

According to various embodiments, the invention relates generally to data science and data analysis, computer software and systems, and data-driven control systems and algorithms based on graph-based data arrangements, among other things, and, more specifically, to a computing platform configured to receive or analyze datasets in parallel by implementing, for example, parallel computing processor systems to correlate subsets of parallelized data from disparately-formatted data sources to identify entity data and to aggregate graph data portions. In some examples, a method may include classifying parallelized data to identify a class of observation data, constructing one or more content graphs in a graph data format, correlating the parallelized data with other subsets of parallelized data associated with a class of observation data, and aggregating observation data to represent an individual entity.
PCT/US2022/042077 2021-08-30 2022-08-30 Corrélation de données parallélisées provenant de sources de données disparates pour agréger des parties de données de graphes afin d'identifier de manière prédictive des données d'entité WO2023034328A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/461,982 US11755602B2 (en) 2016-06-19 2021-08-30 Correlating parallelized data from disparate data sources to aggregate graph data portions to predictively identify entity data
US17/461,982 2021-08-30

Publications (2)

Publication Number Publication Date
WO2023034328A2 true WO2023034328A2 (fr) 2023-03-09
WO2023034328A3 WO2023034328A3 (fr) 2023-04-13

Family

ID=85413050

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/042077 WO2023034328A2 (fr) 2021-08-30 2022-08-30 Corrélation de données parallélisées provenant de sources de données disparates pour agréger des parties de données de graphes afin d'identifier de manière prédictive des données d'entité

Country Status (1)

Country Link
WO (1) WO2023034328A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11816118B2 (en) 2016-06-19 2023-11-14 Data.World, Inc. Collaborative dataset consolidation via distributed computer networks
US11888910B1 (en) * 2022-09-15 2024-01-30 Neptyne Inc System to provide a joint spreadsheet and electronic notebook interface

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020029207A1 (en) * 2000-02-28 2002-03-07 Hyperroll, Inc. Data aggregation server for managing a multi-dimensional database and database management system having data aggregation server integrated therein
US7680765B2 (en) * 2006-12-27 2010-03-16 Microsoft Corporation Iterate-aggregate query parallelization
US20120011144A1 (en) * 2010-07-12 2012-01-12 Frederik Transier Aggregation in parallel computation environments with shared memory
US9507682B2 (en) * 2012-11-16 2016-11-29 Ab Initio Technology Llc Dynamic graph performance monitoring
US9477733B2 (en) * 2013-03-15 2016-10-25 Uda, Lld Hierarchical, parallel models for extracting in real-time high-value information from data streams and system and method for creation of same

Also Published As

Publication number Publication date
WO2023034328A3 (fr) 2023-04-13

Similar Documents

Publication Publication Date Title
US11947529B2 (en) Generating and analyzing a data model to identify relevant data catalog data derived from graph-based data arrangements to perform an action
US11557276B2 (en) Ontology integration for document summarization
Umer et al. Sentiment analysis of tweets using a unified convolutional neural network‐long short‐term memory network model
Kanavos et al. Large scale implementations for twitter sentiment classification
Li et al. Mining opinion summarizations using convolutional neural networks in Chinese microblogging systems
Basarslan et al. Sentiment analysis with machine learning methods on social media
US11755602B2 (en) Correlating parallelized data from disparate data sources to aggregate graph data portions to predictively identify entity data
Wu et al. Structured microblog sentiment classification via social context regularization
Santhoshkumar et al. Earlier detection of rumors in online social networks using certainty-factor-based convolutional neural networks
Choi et al. Dynamic graph convolutional networks with attention mechanism for rumor detection on social media
WO2023034328A2 (fr) Corrélation de données parallélisées provenant de sources de données disparates pour agréger des parties de données de graphes afin d'identifier de manière prédictive des données d'entité
US20220365993A1 (en) Classifying relevance of natural language text for topic-based notifications
US20230214949A1 (en) Generating issue graphs for analyzing policymaker and organizational interconnectedness
US20230214753A1 (en) Generating issue graphs for analyzing organizational influence
Gupta et al. Real-time tweet analytics using hybrid hashtags on twitter big data streams
Dritsas et al. An apache spark implementation for graph-based hashtag sentiment classification on twitter
Sharma et al. Sarcasm detection over social media platforms using hybrid auto-encoder-based model
Rani et al. Online social networking services and spam detection approaches in opinion mining-a review
Ke et al. Rumor detection on social media via fused semantic information and a propagation heterogeneous graph
Alothali et al. Bot-mgat: A transfer learning model based on a multi-view graph attention network to detect social bots
Bai et al. Rumor detection based on a source-replies conversation tree convolutional neural net
Li et al. EventKGE: Event knowledge graph embedding with event causal transfer
Mewari et al. Opinion mining techniques on social media data
Yenkikar et al. Sentimlbench: Benchmark evaluation of machine learning algorithms for sentiment analysis
kumar Mall et al. Self-Attentive CNN+ BERT: An Approach for Analysis of Sentiment on Movie Reviews Using Word Embedding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22865449

Country of ref document: EP

Kind code of ref document: A2