US20200110590A1 - Techniques for configuring and validating a data pipeline deployment - Google Patents


Info

Publication number
US20200110590A1
Authority
US
United States
Prior art keywords
job
data processing
template
data
code
Legal status
Abandoned
Application number
US16/706,094
Inventor
David Lisuk
Paul Gribelyuk
Current Assignee
Palantir Technologies Inc
Original Assignee
Palantir Technologies Inc
Application filed by Palantir Technologies Inc filed Critical Palantir Technologies Inc
Priority to US16/706,094
Publication of US20200110590A1
Assigned to MORGAN STANLEY SENIOR FUNDING, INC.: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Palantir Technologies Inc.
Assigned to WELLS FARGO BANK, N.A.: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Palantir Technologies Inc.

Classifications

    • G06F 9/44505 Configuring for program initiating, e.g. using registry, configuration files
    • G06F 8/60 Software deployment
    • G06F 8/30 Creation or generation of source code
    • G06F 8/36 Software reuse
    • G06F 8/71 Version control; Configuration management
    • G06F 9/54 Interprogram communication
    • G06N 20/20 Ensemble learning
    • G06F 8/433 Dependency analysis; Data or control flow analysis
    • G06F 9/451 Execution arrangements for user interfaces
    • G06N 20/00 Machine learning
    • G06N 20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N 3/08 Neural networks; Learning methods
    • G06N 5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G06N 7/01 Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • the present disclosure relates to data pipeline systems. More specifically, the disclosure relates to configuring, validating, and/or deploying a data pipeline system.
  • a data pipeline system is a series of jobs that each take data as input, apply business logic to the data, and output the results, typically to another job in the pipeline for further processing.
  • a data pipeline system can be complex, requiring many interdependent jobs. Configuring a data pipeline system can be time-consuming, as it requires customizing each job in the data pipeline system. Such customization can require manual programming or implementation of each job in a programming language.
  • different data pipeline system deployments rely on a subset of similar jobs. For example, deduplication of data records can be implemented in one or more jobs. Deduplication is often needed across various data pipeline system deployments. Likewise, configuration of a machine learning system can be implemented in one or more jobs and is often needed across multiple data pipeline system deployments. What is needed is a way to easily configure a data pipeline system and reuse common jobs across data pipeline system deployments.
  • FIG. 1 is an example of a deployed data pipeline system, according to one embodiment.
  • FIG. 2 is an example of a deployment system, according to one embodiment.
  • FIG. 3 is an example of a user interface of a deployment system, according to one embodiment.
  • FIG. 4 is an example of a user interface of a deployment system, according to one embodiment.
  • FIG. 5 is an example of a user interface of a deployment system, according to one embodiment.
  • FIG. 6 is an example of a user interface of a deployment system, according to one embodiment.
  • FIG. 7 is an example of a flow diagram for a process of deploying a data pipeline system, according to one embodiment.
  • FIG. 8 is a block diagram of a computing device in which the example embodiment(s) of the present invention may be embodied.
  • FIG. 9 is a block diagram of a software system for controlling the operation of the computing device.
  • a template is a file or data object that describes a package of related jobs.
  • a template may describe a set of jobs necessary for deduplication of data records or a set of jobs performing machine learning on a set of data records.
  • the template can be defined in a file, such as a JSON blob or XML file.
  • for each job in the package, the template may identify a set of dataset dependencies that are needed as input for the processing of that job.
  • the template may further identify a set of configuration parameters needed for deployment of the job.
  • the template may be used to generate and display a graphical user interface (GUI) for receiving values for the configuration parameters of each job.
  • the template may further identify code for processing the job, such as a particular class or function.
  • the GUI may be used to run a validation process on the underlying data to ensure accuracy of the entered configuration parameter values.
  • the output of the validation result may be displayed via the GUI.
  • the GUI can be used to submit the data pipeline system for deployment.
  • a server uses the template and the configuration parameter values collected via the GUI to generate code for the package of jobs.
  • the code may be stored in a version control system.
  • the code may be compiled, executed, and deployed to a server for processing the data.
  • a data pipeline system is a series of jobs that each take data as input, apply business logic to the data, and output the results. The results can be used as input to one or more jobs further downstream in the data pipeline system.
  • a data pipeline system can be complex, requiring many interdependent jobs.
  • data pipeline systems are frequently used for different data sets.
  • data pipeline systems may be used for deduplication of data, machine learning using data, joining disparate data sources together, data type conversions, data transformations, and/or data cleanup.
  • These data pipeline systems are merely exemplary, and other frequently-used data pipelines may exist for common data processing tasks.
  • FIG. 1 illustrates an example deployed data pipeline system 100 for a machine learning implementation, according to one embodiment.
  • the data pipeline system 100 includes jobs 110 , 120 , 130 , and 140 .
  • Deployed data pipeline system 100 illustrates the present techniques for one particular example of a frequently-used machine learning pipeline, but the techniques can be applied to any frequently-used data pipeline system.
  • Job 110 is programmed or configured to read input data from one or more data sources.
  • data may be read from a database, a file system, or some other data source.
  • the results of job 110 are then sent to job 120 .
  • Job 120 is programmed or configured to featurize the input data received from job 110 so that it is suitable for use in a machine learning model. Featurization may include certain data cleanup tasks, normalization of data, and/or transformation of the input data into another format necessary for machine learning. The featurized data of job 120 are then sent to job 130 .
  • Job 130 is programmed or configured to train a machine learning model using the featurized data.
  • job 130 may train classifier logic on the featurized data.
  • the classifier logic may be implemented as programs that execute one of various known types of classifiers, including a logistic regression classifier, a linear support vector machine classifier, a random forest classifier, a nearest neighbor classifier, a Bayesian classifier, a perceptron, or a neural network.
  • the result of the training of job 130 is a machine learning model that is then sent to job 140 .
  • Job 140 is programmed or configured to take the machine learning model from job 130 and apply it to newly received test data to generate a score.
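The four jobs described above can be sketched as chained functions. This is a minimal illustrative sketch, not the patent's implementation; the function names follow FIG. 1, but the trivial threshold "model" and the record fields are assumptions.

```python
# Hypothetical sketch of the four-job pipeline in FIG. 1; the record fields
# and the trivial threshold "model" are illustrative placeholders.

def read_input_data(source):
    """Job 110: read raw records from a data source (here, an in-memory list)."""
    return list(source)

def featurize(records):
    """Job 120: transform raw records into numeric feature vectors."""
    return [[float(r["amount"]), float(len(r["label"]))] for r in records]

def train_model(features):
    """Job 130: 'train' a stand-in model (mean of the first feature)."""
    threshold = sum(f[0] for f in features) / len(features)
    return {"threshold": threshold}

def apply_model(model, test_features):
    """Job 140: score newly received test data with the trained model."""
    return [1 if f[0] > model["threshold"] else 0 for f in test_features]

# Wire the jobs together: each job's output feeds the next one downstream.
raw = [{"amount": 10, "label": "a"}, {"amount": 30, "label": "bb"}]
feats = featurize(read_input_data(raw))
model = train_model(feats)
scores = apply_model(model, featurize([{"amount": 50, "label": "c"}]))
```

Each function consumes the previous job's output, mirroring the serial dependency chain of jobs 110 through 140.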
  • the example of deployed data pipeline system 100 is an exemplary data pipeline system that may be used frequently for various machine learning application areas. Deploying such a data pipeline system can be time-consuming and prone to user error if it needs to be performed manually from scratch for every application area. Simplification of the deployment of a data pipeline system, such as data pipeline system 100 , can improve system efficiency as well as improve the speed of deployment of data pipelines for new application areas.
  • FIG. 2 illustrates an example of a deployment system 200 in which the techniques described herein may be practiced, according to some embodiments.
  • deployment system 200 is programmed or configured to use a template of a commonly deployed data pipeline system to assist a user in efficiently configuring, validating, and/or deploying a new data pipeline system based on the template.
  • Deployment system 200 may be implemented across one or more physical or virtual computing devices, none of which is intended as a generic computer, since it is loaded with instructions in a new ordered combination as otherwise disclosed herein to implement the functions and algorithms of this disclosure.
  • Deployment system 200 illustrates only one of many possible arrangements of components configured to execute the programming described herein. Other arrangements may include fewer or different components, and the division of work between the components may vary depending on the arrangement.
  • deployment system 200 includes template engine 210 .
  • Template engine 210 is programmed or configured to receive a template and one or more configuration parameter values for a data pipeline system.
  • the template engine 210 can use the template and configuration parameter values to configure, validate, and/or deploy the data pipeline system. Further details regarding the template engine 210 will be discussed herein.
  • Template engine 210 is communicatively coupled to template library 220 , repository 250 , and graphical user interface (GUI) 230 .
  • deployment system 200 includes template library 220 .
  • Template library 220 stores a library of templates 222A through 222N that each store preconfigured settings for commonly deployed data pipeline systems. Further details regarding templates 222 will be discussed herein.
  • deployment system 200 includes GUI 230 .
  • GUI 230 is programmed or configured to receive one or more configuration parameter values from computing device 240 for use in the deployment of a data pipeline system.
  • GUI 230 is further programmed or configured to monitor or view the status of a deployed pipeline system in production environment 270 . Further details regarding GUI 230 will be discussed herein.
  • deployment system 200 includes repository 250 .
  • Repository 250 is programmed or configured to receive and commit code received from the template engine 210 for a data pipeline system. Further details regarding repository 250 will be discussed herein.
  • Repository 250 is communicatively coupled to pipeline deployment service 260 .
  • deployment system 200 includes pipeline deployment service 260 .
  • Pipeline deployment service 260 is programmed or configured to retrieve the committed source code from repository 250 , execute it, and deploy the data pipeline system to a production environment 270 . Further details regarding pipeline deployment service 260 will be discussed herein.
  • the present techniques provide improvements in the configuration, validation, and deployment of data pipeline systems. By unifying frequently-used data pipeline systems into templates, the deployment system is able to remove manual error in the configuration and deployment of data pipeline systems.
  • the present techniques improve computational efficiency by minimizing inefficient implementation of frequently-used data pipeline systems and replacing them with templatized versions of the data pipeline systems that incorporate best practices and that can be customized as necessary by a user for a particular deployment.
  • a template is stored digital data that identifies one or more job definitions for a particular data pipeline system.
  • a template can be implemented in any markup language or data format syntax, such as extensible markup language (XML), “YAML Ain't Markup Language” (YAML), or JavaScript Object Notation (JSON), and is stored in the form of digital data in a storage device or digital memory.
  • One or more templates 222 A through 222 N may be stored in template library 220 .
  • Each template 222 A through 222 N may cover a frequently-used data pipeline system.
  • different templates may exist for frequent data processing tasks such as data deduplication, data cleanup, data extraction, machine learning training and classifying, joining disparate data sources together, and other frequent data processing tasks.
  • a job definition is a set of computer-implemented instructions that can be used for creating, executing, and/or implementing a data processing job in a data pipeline system.
  • the execution of a data processing job may generate one or more output datasets. These output datasets may in turn be used as input datasets in further data processing jobs.
  • a job definition may include a code identifier that identifies code for processing the data processing job.
  • the code identifier may be a method call, a function call, a pointer, a data object, a library, an executable file, a script, a macro, or some other identification of a set of programming instructions for performing the data processing job.
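One plausible way to implement such a code identifier is a registry that maps identifier strings to processing classes; the registry, class, and method names below are assumptions, not from the patent.

```python
# Hypothetical sketch of resolving a job definition's code identifier to an
# executable processing class via a registry; names are illustrative.

class Featurize:
    """Stand-in processing class; real code would apply featurization logic."""
    def run(self, dataset, parameters):
        return [row for row in dataset]  # pass-through for illustration

# Registry mapping code identifiers (as they appear in templates) to classes.
CODE_REGISTRY = {"Featurize": Featurize}

def resolve_code_identifier(identifier):
    """Look up the processing class named by a job definition."""
    try:
        return CODE_REGISTRY[identifier]
    except KeyError:
        raise ValueError(f"unknown code identifier: {identifier!r}")

job_class = resolve_code_identifier("Featurize")
output = job_class().run([{"x": 1}], parameters={})
```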
  • a job definition may include one or more dataset dependency identifiers that each identify an input dataset for the particular data processing job.
  • the dataset dependency identifiers may be used to determine how the particular data processing job is dependent on one or more additional data processing jobs or the output datasets of one or more additional data processing jobs.
  • a dataset dependency identifier for the job definition of the featurize input data job 120 may identify the input dataset generated by read input data job 110 . This dataset dependency identifier thus establishes that featurize input data job 120 is dependent on read input data job 110 .
  • Although the example of data pipeline system 100 shows a serial set of dependencies for the various jobs 110 , 120 , 130 , and 140 , in another embodiment there may be multiple dependencies, non-serial dependencies, or another configuration of dependencies.
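Given such dataset dependency identifiers, a deployment system could derive a valid execution order with a topological sort. This sketch uses Python's standard `graphlib` and mirrors the FIG. 1 job names; the sorting approach is an assumption, not something the patent specifies.

```python
# Hypothetical sketch of deriving an execution order from the dataset
# dependency identifiers in a template's job definitions.
from graphlib import TopologicalSorter

# Map each job to the jobs whose output datasets it depends on.
dependencies = {
    "read_input_data": [],
    "featurize": ["read_input_data"],
    "train_model": ["featurize"],
    "apply_model": ["train_model"],
}

# static_order() yields jobs so that every job runs after its dependencies;
# non-serial graphs (fan-in/fan-out) are handled the same way.
order = list(TopologicalSorter(dependencies).static_order())
```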
  • a job definition may include one or more configuration parameters for processing a job.
  • Configuration parameters are settings necessary for processing the job.
  • the value of a configuration parameter may be hard coded in the job definition.
  • the value of a configuration parameter may be received via GUI 230 , as will be discussed herein.
  • a template may include one or more global configuration parameters for deploying the data pipeline system.
  • the value of a configuration parameter may be hard coded in the template.
  • the value of a configuration parameter may be received via GUI 230 , as will be discussed herein.
  • Table 1 displays an example of an excerpt of a template for performing machine learning.
  • Table 1 illustrates an example template for data pipeline system 100 , according to one embodiment.
  • the template of Table 1 is written in JSON, but another markup language or syntax may be used in other embodiments.
  • the template of Table 1 includes job definitions for four jobs: read_input_data, featurize, train_model, and apply_model, which correspond to jobs 110 , 120 , 130 , and 140 , respectively.
  • Each of the job definitions provides configuration parameters for processing jobs 110 , 120 , 130 , and 140 .
  • Table 1 includes a “root” tag that identifies the root directory where the data pipeline system will be deployed.
  • the “root” tag is an example of a global configuration parameter to be used across multiple data processing jobs.
  • the double curly brackets in this example illustrate that the root parameter is a value that will be provided by a user via GUI 230 instead of hard coded; however, in other embodiments, different syntax may be used.
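The placeholder substitution described above might be implemented as a simple text replacement over the template; the function below is an illustrative sketch assuming the double-curly-bracket syntax, not the patent's actual mechanism.

```python
# Hypothetical sketch of substituting GUI-supplied values for {{name}}
# placeholders in a template fragment; the syntax follows the description
# of Table 1, the code itself is illustrative.
import re

def fill_placeholders(text, values):
    """Replace each {{name}} placeholder with the user-provided value."""
    def lookup(match):
        name = match.group(1).strip()
        if name not in values:
            raise KeyError(f"no value provided for placeholder {name!r}")
        return values[name]
    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", lookup, text)

fragment = '{"path": "{{ root }}/Featurize"}'
filled = fill_placeholders(fragment, {"root": "/pipelines/demo"})
```

Raising on a missing value ensures that every placeholder is resolved before deployment proceeds.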
  • Table 1 includes a featurize job definition.
  • the job definition for the featurize job includes a “transaction_type” tag.
  • the transaction_type defines how the output of the job is going to be used.
  • a SNAPSHOT value for the transaction_type indicates that the output of the job will be a newly generated dataset.
  • An APPEND value for the transaction_type indicates that the output of the job will be appended to an existing dataset (not depicted in Table 1).
  • the job definition for the featurize job includes a “path” tag.
  • the path tag defines where the dataset generated by the job will be output. In the example of the featurize job, the output will be placed in the subdirectory “/Featurize” under the root directory.
  • the job definition for the featurize job in Table 1 includes a “code_identifier” tag.
  • the code_identifier tag defines the target code that needs to be executed in order to process the job.
  • the class “Featurize” can be used for processing the job.
  • the code identifier includes a “parameters” tag that identifies one or more configuration parameters for executing the code.
  • the configuration parameters for the featurize job include a “featurized_columns” configuration parameter and a “featurizers” configuration parameter.
  • the curly brackets for these configuration parameters indicate that the values for these configuration parameters will be provided by a user via GUI 230 .
  • the values of configuration parameters may be hard coded in the job definition itself.
  • the job definition for the featurize job in Table 1 includes a “dependencies” tag.
  • the dependencies tag identifies one or more input datasets for the featurize job.
  • the featurize job takes as an input the dataset generated by the “read_input_data” job defined earlier in the template.
  • the featurize job will use as input the output of the read_input_data job.
  • the dependencies information thus describes how the different jobs in a template are interrelated and/or dependent on one another. Although the example illustrated here only identifies a single dependency, in other embodiments, multiple dependencies may exist.
  • the job definition for the featurize job in Table 1 includes a “schema” tag.
  • the schema tag identifies a schema for an output dataset for the job. In this case, no schema is specified for the job.
  • a schema may identify various characteristics of the output dataset, such as data types, expected values, column names, etc.
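Table 1 itself is not reproduced in this text. Based on the tags described above (root, transaction_type, path, code_identifier, parameters, dependencies, schema), a hypothetical excerpt for the featurize job might look like the following; every key and value here is a reconstruction consistent with the description, not the actual Table 1.

```python
# Hypothetical reconstruction of a featurize job definition, consistent
# with the tags described for Table 1. All keys and values are assumptions.
import json

template_excerpt = """
{
  "root": "{{root}}",
  "jobs": {
    "featurize": {
      "transaction_type": "SNAPSHOT",
      "path": "{{root}}/Featurize",
      "code_identifier": {
        "class": "Featurize",
        "parameters": {
          "featurized_columns": "{{featurized_columns}}",
          "featurizers": "{{featurizers}}"
        }
      },
      "dependencies": ["read_input_data"],
      "schema": null
    }
  }
}
"""

template = json.loads(template_excerpt)
featurize_job = template["jobs"]["featurize"]
```

Because the template is plain JSON, the curly-bracket placeholders survive parsing as ordinary string values until a substitution pass fills them in.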
  • a template thus allows the deployment system 200 to use out-of-the-box configurations for frequently-used pipelines, while still allowing customization of specific configuration parameters by a user. This results in efficiency and ease of deployment of a pipeline.
  • GUI 230 may be accessible by one or more computing devices 240 and will allow users to interact with the configuration, validation, and deployment of a data pipeline system.
  • template engine 210 can use a template 222 to generate a GUI 230 . Template engine 210 can then receive configuration parameter values received via GUI 230 .
  • FIG. 3 illustrates an example user interface 300 of GUI 230 for initial configuration of a data pipeline.
  • user interface 300 may include one or more user inputs 310 and 320 for receiving configuration values for global configuration parameters.
  • user input 310 is used to receive a value for a configuration parameter that defines the pipeline root directory.
  • User input 320 is used to receive a value for a configuration parameter that identifies which pipeline template or pipeline templates should be used during configuration.
  • the user input 320 may include a dropdown or selection of available templates 222 .
  • Template engine 210 may retrieve the list of available templates 222 from template library 220 and display the list of available templates 222 via user input 320 to allow a user to easily select a template from available templates.
  • User interface 300 includes a user input 330 to proceed to the next step in the pipeline configuration.
  • FIG. 4 illustrates an example user interface 400 of GUI 230 for configuration of a particular data processing job.
  • User interface 400 may be displayed after the user selects user input 330 in FIG. 3 .
  • the example of user interface 400 corresponds to the job definition for the read_input_data job described above with reference to Table 1.
  • User input 410 allows a user to select which job in the data pipeline system they want to view. In this particular case, the user input 410 is set to view the read_input_data job.
  • the values of the user input 410 may include a list of one or more of the job definitions in the template that was selected with reference to user interface 300 .
  • User interface 400 may include a display 420 that shows the input datasets for the particular data processing job.
  • the input datasets are the dataset dependencies that are defined in the template for the particular data processing job.
  • the value of display 420 is empty.
  • User interface 400 may include a help display 430 that provides a user with information regarding the data processing job.
  • the information displayed in help display 430 may be defined in the job definition of the template.
  • User interface 400 may include a set of configuration parameters 440 .
  • configuration parameters 440 include a user input 450 for receiving a value for the file_path parameter.
  • the contents of configuration parameters 440 may be displayed based on the job definition in the template. For example, Table 1 shows that the file_path parameter is a parameter for the read_input_data job, and the value needs to be provided by a user.
  • User interface 400 may include a user input 460 to perform validation of the currently selected data processing job. For example, by selecting user input 460 , a user could attempt to validate the configuration parameters and other settings of the read_input_data job. Further details regarding job validation will be described herein.
  • User interface 400 may include a user input 470 to initiate the deployment of the data pipeline system by template engine 210 . Further details regarding the deployment of the data pipeline system by template engine 210 will be described herein.
  • FIG. 5 illustrates an example user interface 500 of GUI 230 for configuration of a particular data processing job.
  • the example of user interface 500 corresponds to the job definition for the featurize job described above with reference to Table 1.
  • User input 510 allows a user to select which job in the data pipeline system they want to view. In this particular case, the user input 510 is set to view the featurize job. For example, the user may have navigated to user interface 500 by selecting the “featurize” option from user input 410 in the prior user interface 400 .
  • User interface 500 may include a display 520 that shows the input datasets for the particular data processing job.
  • display 520 shows that read_input_data is an input dataset. This allows a user to quickly see the interrelated dependencies amongst datasets and/or jobs in the data pipeline system.
  • User interface 500 may include a help display 530 that provides a user with information regarding the data processing job.
  • the information displayed in help display 530 may be defined in the job definition of the template.
  • User interface 500 may include a set of configuration parameters 540 .
  • configuration parameters 540 include a user input 550 for receiving a value for the featurized_columns parameter.
  • Configuration parameters 540 include a user input 552 for receiving a value for the featurizers parameter.
  • the contents of configuration parameters 540 may be displayed based on the job definition in the template. For example, Table 1 shows that featurized_columns and featurizers are parameters provided by a user for the featurize job.
  • User interface 500 may include a user input 560 to perform validation of the currently selected data processing job. For example, by selecting user input 560 , a user could attempt to validate the configuration parameters and other settings of the featurize job. Further details regarding job validation will be described herein.
  • User interface 500 may include a user input 570 to initiate the deployment of the data pipeline system by template engine 210 . Further details regarding the deployment of the data pipeline system by template engine 210 will be described herein.
  • GUI 230 may also be communicatively coupled to production environment 270 and may be used to view the status and health of the deployed pipeline system in production environment 270 . Further details regarding viewing the status and health of a pipeline may be found in U.S. Pat. No. 9,678,850 (“Data Pipeline Monitoring”), which is incorporated by reference as if fully set forth herein.
  • template engine 210 is programmed or configured to use a template 222 for a data pipeline system in combination with one or more configuration parameter values received via GUI 230 to configure, validate, and generate a set of code for deployment of the data pipeline system.
  • template engine 210 may cause to be displayed in GUI 230 one or more user interfaces for receiving configuration parameter values, such as the user interfaces displayed earlier with respect to FIGS. 3, 4, and 5 .
  • template engine 210 can send the appropriate code to repository 250 for storage.
  • template engine 210 can retrieve or execute the relevant computer code identified by the code identifier in the job definition. Template engine 210 can then use the retrieved computer code, the dataset dependencies, the configuration parameters, and the configuration parameter values received from GUI 230 to prepare a set of jobs defined by the template for inclusion in a data pipeline system. In an embodiment, preparation of the set of jobs may include copying the relevant code, parameters, values, templates, and datasets and storing them in a repository 250 .
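As a concrete illustration of this preparation step, a template engine might render per-job files ready to be committed to the repository; the file layout, names, and generated contents below are assumptions, not the patent's format.

```python
# Hypothetical sketch of the preparation step: write a driver file plus a
# parameters file for one job into a repository working directory.
import json
import pathlib
import tempfile

def prepare_job(repo_dir, job_name, code_class, parameter_values):
    """Render one job's files from its definition and parameter values."""
    job_dir = pathlib.Path(repo_dir) / job_name
    job_dir.mkdir(parents=True, exist_ok=True)
    # Persist the resolved configuration parameters alongside the driver.
    (job_dir / "parameters.json").write_text(json.dumps(parameter_values))
    (job_dir / "driver.py").write_text(
        f"# auto-generated driver for job {job_name!r}\n"
        f"JOB_CLASS = {code_class!r}\n"
    )
    return sorted(p.name for p in job_dir.iterdir())

with tempfile.TemporaryDirectory() as repo:
    files = prepare_job(repo, "featurize", "Featurize",
                        {"featurized_columns": ["amount"]})
```

In a full system these rendered files would then be committed to repository 250 for versioning and deployment.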
  • Repository 250 is programmed or configured to serve as an archive for storing, managing, and accessing computer code and other digital data.
  • repository 250 may be programmed or configured to allow for the checking in, checking out, committing, merging, branching, forking, or other management of computer code and other digital data.
  • Computer code can be in any programming language, including, but not limited to Java, Structured Query Language (SQL), Python, Scala, etc.
  • Repository 250 may be programmed or configured to provide version control for source code files.
  • repository 250 may be accessible via a web interface and/or a command line interface.
  • repository 250 may be implemented as a Git repository.
  • Pipeline deployment service 260 is programmed or configured to retrieve computer code from repository 250 and deploy it to a production environment 270 .
  • pipeline deployment service 260 is programmed or configured to compile and build the computer code, using the template, into a set of executable code, such as a JAR file, SQL file, executable file (.EXE), library, plugin, or any other form of executable code.
  • the computer code may be compiled and built into multiple sets of executable code. For example, each job definition in the template may correspond to one or more sets of executable code.
  • pipeline deployment service 260 may be implemented as a Gradle build system. The end result may be the deployment of the data pipeline system specified in the template into production environment 270 .
  • deployment system 200 may also perform validation of data processing job configurations during setup. For example, if a user selects user inputs 460 or 560 , the template engine 210 can execute validation of the data processing job based on the provided configuration parameter values.
  • FIG. 6 illustrates an example of a validation screen for the featurize job after a user selects user input 560 .
  • Selection of user input 560 causes template engine 210 to use the provided configuration parameter values as well as the selected template and to attempt validation of the data processing job based on those settings.
  • the selected source code in the job definition of the template may include one or more functions, methods, or other sequences of instructions for validation of the data processing job.
  • the sequences of instructions for validation can be used to generate validation results 610 .
  • the sequence of instructions for validation may cause the data processing job, as well as any dataset dependencies or jobs, to be compiled, built, executed, and to apply specific validation criteria with the given parameter values.
  • validation criteria may be stored and/or defined in one or more of templates 222 A through 222 N. In another embodiment, validation criteria may be stored and/or defined in template engine 210 . In another embodiment, validation criteria may be stored and/or defined in another data store coupled to template engine 210 (not depicted). In an embodiment, validation may be performed in a development environment separate from production environment 270 so that validation does not compromise production performance.
  • Validation results 610 shows the results of application of various validation criteria to the data processing job with the given configuration parameters and template.
  • Each validation criterion may refer to a specific sequence of instructions to apply to the particular data processing job.
  • the value of the validation criterion may indicate the return result of that sequence of instructions.
  • For example, for the “featurization_exception” criterion, the validation logic found one exception when attempting to featurize the data provided by the configuration parameters.
  • the lower bound and upper bound may specify limits of acceptable values for the various validation criteria.
  • the “Is Valid?” field can show whether or not the value observed for a validation criterion is within the expected bounds.
  • Validation results 610 thus allows a user to quickly and easily troubleshoot the configuration of a data processing job early during configuration instead of waiting to deploy the data processing job in a production environment.
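As a hedged illustration, the bounds check behind such an “Is Valid?” column might look like the following Python sketch; the criterion names echo the featurization example above, but the data structures are assumptions:

```python
# Illustrative bounds check for validation criteria (names hypothetical).

def check_criteria(observed, bounds):
    """Return one result row per criterion, flagging out-of-bounds values."""
    results = []
    for name, value in observed.items():
        lower, upper = bounds[name]
        results.append({
            "criteria": name,
            "value": value,
            "lower_bound": lower,
            "upper_bound": upper,
            "is_valid": lower <= value <= upper,
        })
    return results

# One exception observed where zero were allowed fails validation;
# a row count inside its bounds passes.
results = check_criteria(
    {"featurization_exception": 1, "row_count": 5000},
    {"featurization_exception": (0, 0), "row_count": (1, 1_000_000)},
)
```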
  • User input 630 allows a user to navigate back to the user interface 500 for the featurize job.
  • FIG. 7 illustrates a process 700 of configuring a data pipeline system for deployment.
  • FIG. 7 is described with reference to deployment system 200 , but other embodiments may implement or execute the process 700 using other computer systems.
  • FIG. 7 and each other flow diagram in this disclosure, is intended to illustrate an algorithm that can be used as the basis of programming an implementation of one or more of the claims that are set forth herein, using digital computers and a programming language or development environment, and is illustrated and described at the level at which skilled persons, in the field to which this disclosure is directed, are accustomed to communicating with one another to identify or describe programs, methods, objects and the like that can provide a working system.
  • template engine 210 is programmed or configured to retrieve a list of available templates 222 A through 222 N stored in template library 220 .
  • Each template 222 may provide configuration parameters for a plurality of data processing jobs in a frequently-used data pipeline system. The process 700 may then proceed to step 720 .
  • step 720 template engine 210 is programmed or configured to display the list of available templates 222 A through 222 N in GUI 230 .
  • GUI 230 thus allows a user accessing a computing device 240 to easily view which data pipeline systems are frequently used, making it easier and more efficient to deploy a frequently used data pipeline system.
  • the process 700 may then proceed to step 730 .
  • template engine 210 receives a user input selecting a template for configuration and deployment via GUI 230 .
  • the user input may be provided by computing device 240 .
  • Template engine 210 retrieves the template 222 selected from template library 220 .
  • Template 222 includes a plurality of job definitions for a particular data pipeline system. The process 700 may then proceed to step 740 .
  • template engine 210 uses template 222 to display on GUI 230 one or more user interfaces for receiving configuration parameter values for the data pipeline system and its jobs. Examples of such user interfaces include user interfaces for receiving global configuration parameter values, as in user interface 300 , and for receiving global configuration parameter values for specific jobs, as in user interfaces 400 and 500 . Template engine 210 uses the various values specified in the template 222 to determine what information to display in the user interfaces, including what fields require user input. The process 700 may then proceed to step 750 .
  • template engine 210 receives a plurality of configuration parameter values for the data pipeline system and jobs via GUI 230 .
  • the configuration parameter values may be provided via user inputs.
  • configuration parameter values may be received for global configuration parameters and/or for one or more of each job definition in the template.
  • the configuration parameter values may be stored. The process 700 may then proceed to step 760 .
  • template engine 210 uses the template 222 and the various configuration parameter values stored in step 750 to prepare code for the deployment of the data pipeline system.
  • the template 222 will include one or more code identifiers for each job, each of which specifies a set of computing instructions, such as a method, function, script, or other sequence that can be used for the particular job.
  • Template engine 210 will retrieve the appropriate code from a source code repository (not depicted), using the template and the configuration parameter values and prepare a set of code for each of the jobs in the data pipeline system. The process 700 may then proceed to step 770 .
  • step 770 template engine 210 will store the necessary template, configuration parameter values, prepared code, dataset dependencies, and other necessary digital data in repository 250 .
  • the process 700 may then proceed to step 780 .
  • pipeline deployment service 260 will retrieve the data stored in repository 250 in step 770 . Pipeline deployment service 260 will then compile, build, and deploy the code to production environment 270 .
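The steps of process 700 above can be sketched end to end in Python. This is a hypothetical outline only: the collaborator objects and their method names are assumed stand-ins for components 220, 230, 250, and 260, not an actual API.

```python
# Hypothetical end-to-end outline of process 700. Each collaborator is
# an assumed stand-in for a deployment-system component of FIG. 2.

def prepare_code(template, values):
    # Step 760 stand-in: pair each job definition with its parameter values.
    return [(job["name"], values.get(job["name"], {}))
            for job in template["jobs"]]

def run_process_700(template_library, gui, repository, deploy_service):
    templates = template_library.list_templates()    # step 710
    gui.display(templates)                           # step 720
    template = template_library.get(gui.selected())  # step 730
    gui.render_parameter_forms(template)             # step 740
    values = gui.collect_parameter_values()          # step 750
    code = prepare_code(template, values)            # step 760
    repository.store(template, values, code)         # step 770
    deploy_service.build_and_deploy(repository)      # step 780
```

In a real deployment, the store step would commit to a version control system such as Git, and the final step would invoke a build system such as Gradle, as the embodiments above describe.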
  • Referring now to FIG. 8, it is a block diagram that illustrates a computing device 800 in which the example embodiment(s) of the present invention may be embodied.
  • Computing device 800 and its components, including their connections, relationships, and functions, are meant to be exemplary only, and not meant to limit implementations of the example embodiment(s).
  • Other computing devices suitable for implementing the example embodiment(s) may have different components, including components with different connections, relationships, and functions.
  • Computing device 800 may include a bus 802 or other communication mechanism for addressing main memory 806 and for transferring data between and among the various components of device 800 .
  • Computing device 800 may also include one or more hardware processors 804 coupled with bus 802 for processing information.
  • a hardware processor 804 may be a general purpose microprocessor, a system on a chip (SoC), or other processor.
  • Main memory 806 such as a random access memory (RAM) or other dynamic storage device, also may be coupled to bus 802 for storing information and software instructions to be executed by processor(s) 804 .
  • Main memory 806 also may be used for storing temporary variables or other intermediate information during execution of software instructions to be executed by processor(s) 804 .
  • Software instructions, when stored in storage media accessible to processor(s) 804 , render computing device 800 into a special-purpose computing device that is customized to perform the operations specified in the software instructions.
  • the terms “software”, “software instructions”, “computer program”, “computer-executable instructions”, and “processor-executable instructions” are to be broadly construed to cover any machine-readable information, whether or not human-readable, for instructing a computing device to perform specific operations, and including, but not limited to, application software, desktop applications, scripts, binaries, operating systems, device drivers, boot loaders, shells, utilities, system software, JAVASCRIPT, web pages, web applications, plugins, embedded software, microcode, compilers, debuggers, interpreters, virtual machines, linkers, and text editors.
  • Computing device 800 also may include read only memory (ROM) 808 or other static storage device coupled to bus 802 for storing static information and software instructions for processor(s) 804 .
  • One or more mass storage devices 810 may be coupled to bus 802 for persistently storing information and software instructions on fixed or removable media, such as magnetic, optical, solid-state, magnetic-optical, flash memory, or any other available mass storage technology.
  • the mass storage may be shared on a network, or it may be dedicated mass storage.
  • at least one of the mass storage devices 810 (e.g., the main hard disk for the device) stores a body of program and data for directing operation of the computing device, including an operating system, user application programs, driver and other support files, as well as other data files of all sorts.
  • Computing device 800 may be coupled via bus 802 to display 812 , such as a liquid crystal display (LCD) or other electronic visual display, for displaying information to a computer user.
  • In an embodiment, a touch sensitive surface incorporating touch detection technology (e.g., resistive, capacitive, etc.) may be overlaid on display 812 to form a touch sensitive display for communicating touch gesture (e.g., finger or stylus) input to processor(s) 804 .
  • An input device 814 may be coupled to bus 802 for communicating information and command selections to processor 804 .
  • input device 814 may include one or more physical buttons or switches such as, for example, a power (on/off) button, a “home” button, volume control buttons, or the like.
  • Another type of input device may be a cursor control 816 , such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to processor 804 and for controlling cursor movement on display 812 .
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • In an embodiment, the techniques described herein are performed by computing device 800 in response to processor(s) 804 executing one or more programs of software instructions contained in main memory 806 .
  • Such software instructions may be read into main memory 806 from another storage medium, such as storage device(s) 810 . Execution of the software instructions contained in main memory 806 causes processor(s) 804 to perform the functions of the example embodiment(s).
  • While the example embodiment(s) may be implemented entirely with software instructions, hard-wired or programmable circuitry of computing device 800 (e.g., an ASIC, a FPGA, or the like) may be used in other embodiments in place of or in combination with software instructions to perform the functions.
  • Non-volatile media includes, for example, non-volatile random access memory (NVRAM), flash memory, optical disks, magnetic disks, or solid-state drives, such as storage device 810 .
  • Volatile media includes dynamic memory, such as main memory 806 .
  • Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, flash memory, or any other memory chip or cartridge.
  • Storage media is distinct from but may be used in conjunction with transmission media.
  • Transmission media participates in transferring information between storage media.
  • transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 802 .
  • transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • the software instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer.
  • the remote computer can load the software instructions into its dynamic memory and send the software instructions over a telephone line using a modem.
  • a modem local to computing device 800 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal.
  • An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 802 .
  • Bus 802 carries the data to main memory 806 , from which processor(s) 804 retrieves and executes the software instructions.
  • the software instructions received by main memory 806 may optionally be stored on storage device(s) 810 either before or after execution by processor(s) 804 .
  • Computing device 800 also may include one or more communication interface(s) 818 coupled to bus 802 .
  • a communication interface 818 provides a two-way data communication coupling to a wired or wireless network link 820 that is connected to a local network 822 (e.g., Ethernet network, Wireless Local Area Network, cellular phone network, Bluetooth wireless network, or the like).
  • Communication interface 818 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
  • communication interface 818 may be a wired network interface card, a wireless network interface card with an integrated radio antenna, or a modem (e.g., ISDN, DSL, or cable modem).
  • Network link(s) 820 typically provide data communication through one or more networks to other data devices.
  • a network link 820 may provide a connection through a local network 822 to a host computer 824 or to data equipment operated by an Internet Service Provider (ISP) 826 .
  • ISP 826 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 828 .
  • Internet 828 uses electrical, electromagnetic or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link(s) 820 and through communication interface(s) 818 , which carry the digital data to and from computing device 800 are example forms of transmission media.
  • Computing device 800 can send messages and receive data, including program code, through the network(s), network link(s) 820 and communication interface(s) 818 .
  • a server 830 might transmit a requested code for an application program through Internet 828 , ISP 826 , local network(s) 822 and communication interface(s) 818 .
  • the received code may be executed by processor 804 as it is received, and/or stored in storage device 810 , or other non-volatile storage for later execution.
  • FIG. 9 is a block diagram of a software system 900 that may be employed for controlling the operation of computing device 800 .
  • Software system 900 and its components, including their connections, relationships, and functions, are meant to be exemplary only, and not meant to limit implementations of the example embodiment(s).
  • Other software systems suitable for implementing the example embodiment(s) may have different components, including components with different connections, relationships, and functions.
  • Software system 900 is provided for directing the operation of computing device 800 .
  • Software system 900 which may be stored in system memory (RAM) 806 and on fixed storage (e.g., hard disk or flash memory) 810 , includes a kernel or operating system (OS) 910 .
  • the OS 910 manages low-level aspects of computer operation, including managing execution of processes, memory allocation, file input and output (I/O), and device I/O.
  • One or more application programs represented as 902 A, 902 B, 902 C . . . 902 N, may be “loaded” (e.g., transferred from fixed storage 810 into memory 806 ) for execution by the system 900 .
  • the applications or other software intended for use on device 800 may also be stored as a set of downloadable computer-executable instructions, for example, for downloading and installation from an Internet location (e.g., a Web server, an app store, or other online service).
  • Software system 900 includes a graphical user interface (GUI) 915 , for receiving user commands and data in a graphical (e.g., “point-and-click” or “touch gesture”) fashion. These inputs, in turn, may be acted upon by the system 900 in accordance with instructions from operating system 910 and/or application(s) 902 .
  • the GUI 915 also serves to display the results of operation from the OS 910 and application(s) 902 , whereupon the user may supply additional inputs or terminate the session (e.g., log off).
  • OS 910 can execute directly on the bare hardware 920 (e.g., processor(s) 804 ) of device 800 .
  • a hypervisor or virtual machine monitor (VMM) 930 may be interposed between the bare hardware 920 and the OS 910 .
  • VMM 930 acts as a software “cushion” or virtualization layer between the OS 910 and the bare hardware 920 of the device 800 .
  • VMM 930 instantiates and runs one or more virtual machine instances (“guest machines”). Each guest machine comprises a “guest” operating system, such as OS 910 , and one or more applications, such as application(s) 902 , designed to execute on the guest operating system.
  • the VMM 930 presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems.
  • the VMM 930 may allow a guest operating system to run as if it is running on the bare hardware 920 of device 800 directly. In these instances, the same version of the guest operating system configured to execute on the bare hardware 920 directly may also execute on VMM 930 without modification or reconfiguration. In other words, VMM 930 may provide full hardware and CPU virtualization to a guest operating system in some instances.
  • a guest operating system may be specially designed or configured to execute on VMM 930 for efficiency.
  • the guest operating system is “aware” that it executes on a virtual machine monitor.
  • VMM 930 may provide para-virtualization to a guest operating system in some instances.
  • the above-described computer hardware and software is presented for purpose of illustrating the underlying computer components that may be employed for implementing the example embodiment(s).
  • the example embodiment(s), however, are not necessarily limited to any particular computing environment or computing device configuration. Instead, the example embodiment(s) may be implemented in any type of system architecture or processing environment that one skilled in the art, in light of this disclosure, would understand as capable of supporting the features and functions of the example embodiment(s) presented herein.

Abstract

Techniques for configuring and validating a data pipeline system deployment are described. In an embodiment, a template is a file or data object that describes a package of related jobs. For example, a template may describe a set of jobs necessary for deduplication of data records or a set of jobs performing machine learning on a set of data records. The template can be defined in a file, such as a JSON blob or XML file. For each job specified in the template, the template may identify a set of dataset dependencies that are needed as input for the processing of that job. For each job specified in the template, the template may further identify a set of configuration parameters needed for deployment of the job. In an embodiment, a server uses the template and the configuration parameter values collected via the GUI to generate code for the package of jobs. The code may be stored in a version control system. In an embodiment, the code may be compiled, executed, and deployed to a server for processing the data.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS; BENEFIT CLAIM
  • This application is a continuation of U.S. patent application Ser. No. 15/977,666, filed May 11, 2018, the entire contents of which is hereby incorporated by reference as if fully set forth herein, under 35 U.S.C. § 120; which claims priority to U.S. Provisional Patent Application No. 62/527,988, filed Jun. 30, 2017.
  • TECHNICAL FIELD
  • The present disclosure relates to data pipeline systems. More specifically, the disclosure relates to configuring, validating, and/or deploying a data pipeline system.
  • BACKGROUND
  • The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
  • A data pipeline system is a series of jobs that each take data as input, apply business logic to the data, and output the results, typically to another job in the pipeline for further processing. A data pipeline system can be complex, requiring many interdependent jobs. Configuring a data pipeline system can be time-consuming, as it requires customizing each job in the data pipeline system. Such customization can require manual programming or implementation of each job in a programming language. However, oftentimes different data pipeline system deployments rely on a subset of similar jobs. For example, deduplication of data records can be implemented in one or more jobs. Deduplication is often needed across various data pipeline system deployments. Likewise, configuration of a machine learning system can be implemented in one or more jobs and is often needed across multiple data pipeline system deployments. What is needed is a way to easily configure a data pipeline system and reuse common jobs across data pipeline system deployments.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The example embodiment(s) of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements and in which:
  • FIG. 1 is an example of a deployed data pipeline system, according to one embodiment.
  • FIG. 2 is an example of a deployment system, according to one embodiment.
  • FIG. 3 is an example of a user interface of a deployment system, according to one embodiment.
  • FIG. 4 is an example of a user interface of a deployment system, according to one embodiment.
  • FIG. 5 is an example of a user interface of a deployment system, according to one embodiment.
  • FIG. 6 is an example of a user interface of a deployment system, according to one embodiment.
  • FIG. 7 is an example of a flow diagram for a process of deploying a data pipeline system, according to one embodiment.
  • FIG. 8 is a block diagram of a computing device in which the example embodiment(s) of the present invention may be embodied.
  • FIG. 9 is a block diagram of a software system for controlling the operation of the computing device.
  • While each of the figures illustrates a particular embodiment for purposes of illustrating a clear example, other embodiments may omit, add to, reorder, and/or modify any of the elements shown in the figures.
  • DESCRIPTION OF THE EXAMPLE EMBODIMENT(S)
  • In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the example embodiment(s) of the present invention. It will be apparent, however, that the example embodiment(s) may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the example embodiment(s).
  • 1.0 GENERAL OVERVIEW
  • 2.0 EXAMPLE COMPUTER SYSTEM IMPLEMENTATION
  • 2.1 TEMPLATES
  • 2.2 GRAPHICAL USER INTERFACE (GUI)
  • 2.3 DEPLOYMENT OF DATA PIPELINE SYSTEM
  • 2.4 VALIDATION
  • 3.0 EXAMPLE PROCESS AND ALGORITHM
  • 4.0 IMPLEMENTATION MECHANISMS—HARDWARE OVERVIEW
  • 5.0 IMPLEMENTATION MECHANISMS—SOFTWARE OVERVIEW
  • 6.0 OTHER ASPECTS OF DISCLOSURE
  • 1.0 General Overview
  • Techniques for configuring and validating a data pipeline system deployment are described. In an embodiment, a template is a file or data object that describes a package of related jobs. For example, a template may describe a set of jobs necessary for deduplication of data records or a set of jobs performing machine learning on a set of data records. The template can be defined in a file, such as a JSON blob or XML file. For each job specified in the template, the template may identify a set of dataset dependencies that are needed as input for the processing of that job. For each job specified in the template, the template may further identify a set of configuration parameters needed for deployment of the job. The template may be used to generate and display a graphical user interface (GUI) for receiving values for the configuration parameters of each job. For each job specified in the template, the template may further identify code for processing the job, such as a particular class or function. In an embodiment, the GUI may be used to run a validation process on the underlying data to ensure accuracy of the entered configuration parameter values. The output of the validation result may be displayed via the GUI. Once finished, the GUI can be used to submit the data pipeline system for deployment. In an embodiment, a server uses the template and the configuration parameter values collected via the GUI to generate code for the package of jobs. The code may be stored in a version control system. In an embodiment, the code may be compiled, executed, and deployed to a server for processing the data.
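For concreteness, a template of this kind might look like the following JSON sketch. Every field name and value here is a hypothetical illustration of the elements described above (job definitions, code identifiers, dataset dependencies, and configuration parameters), not an actual template schema.

```json
{
  "name": "machine_learning_pipeline",
  "jobs": [
    {
      "name": "featurize",
      "code_id": "com.example.jobs.Featurize",
      "dataset_dependencies": ["raw_input"],
      "configuration_parameters": [
        {"name": "output_path", "type": "string", "required": true},
        {"name": "normalize", "type": "boolean", "default": true}
      ]
    },
    {
      "name": "train_model",
      "code_id": "com.example.jobs.TrainModel",
      "dataset_dependencies": ["featurize"],
      "configuration_parameters": [
        {"name": "classifier", "type": "string", "default": "logistic_regression"}
      ]
    }
  ]
}
```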
  • 2.0 Example Computer System Implementation
  • A data pipeline system is a series of jobs that each take data as input, apply business logic to the data, and output the results. The results can be used as input to one or more jobs further downstream in the data pipeline system. A data pipeline system can be complex, requiring many interdependent jobs.
  • In the context of large scale data analytics, certain types of data pipeline systems are frequently used for different data sets. For example, data pipeline systems may be used for deduplication of data, machine learning using data, joining disparate data sources together, data type conversions, data transformations, and/or data cleanup. These data pipeline systems are merely exemplary, and other frequently-used data pipelines may exist for common data processing tasks.
  • FIG. 1 illustrates an example deployed data pipeline system 100 for a machine learning implementation, according to one embodiment. The data pipeline system 100 includes jobs 110, 120, 130, and 140. Deployed data pipeline system 100 is intended to be illustrative for showing the present techniques for one particular example of a frequently-used data pipeline for machine learning, but the present techniques can be applied to any frequently-used data pipeline systems.
  • Job 110 is programmed or configured to read input data from one or more data sources. For example, data may be read from a database, a file system, or some other data source. The results of job 110 are then sent to job 120.
  • Job 120 is programmed or configured to featurize the input data received from job 110 so that it is suitable for use in a machine learning model. Featurization may include certain data cleanup tasks, normalization of data, and/or transformation of the input data into another format necessary for machine learning. The featurized data of job 120 are then sent to job 130.
  • Job 130 is programmed or configured to train a machine learning model using the featurized data. For example, job 130 may train classifier logic on the featurized data. The classifier logic may be implemented as programs that execute one of various known types of classifiers, including a logistic regression classifier, a linear support vector machine classifier, a random forest classifier, a nearest neighbor classifier, a Bayesian classifier, a perceptron, or a neural network. The result of the training of job 130 is a machine learning model that is then sent to job 140.
  • Job 140 is programmed or configured to take the machine learning model from job 130 and apply it to newly received test data to generate a score.
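The four jobs can be illustrated with a toy Python sketch. The peak normalization and the threshold "classifier" below are deliberately simplistic placeholders chosen to keep the example self-contained; they are not the claimed featurization or classifier logic.

```python
# Toy stand-ins for jobs 110-140 of the example pipeline (FIG. 1).

def read_input(rows):                # job 110: read input data
    return list(rows)

def featurize(rows):                 # job 120: normalize into feature vectors
    peak = max(r["value"] for r in rows) or 1
    return [[r["value"] / peak] for r in rows]

def train(features, labels):         # job 130: train a "model"
    # A trivial mean-threshold model stands in for real classifier
    # logic (labels are unused by this toy stand-in).
    cutoff = sum(f[0] for f in features) / len(features)
    return lambda f: 1 if f[0] >= cutoff else 0

def score(model, test_features):     # job 140: apply the model to test data
    return [model(f) for f in test_features]
```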
  • The example of deployed data pipeline system 100 is an exemplary data pipeline system that may be used frequently for various machine learning application areas. Deploying such a data pipeline system can be time-consuming and prone to user error if it needs to be performed manually from scratch for every application area. Simplification of the deployment of a data pipeline system, such as data pipeline system 100, can improve system efficiency as well as improve the speed of deployment of data pipelines for new application areas.
  • FIG. 2 illustrates an example of a deployment system 200 in which the techniques described herein may be practiced, according to some embodiments. In the example of FIG. 2, deployment system 200 is programmed or configured to use a template of a commonly deployed data pipeline system to assist a user in efficiently configuring, validating, and/or deploying a new data pipeline system based on the template. Deployment system 200 may be implemented across one or more physical or virtual computing devices, none of which is intended as a generic computer, since it is loaded with instructions in a new ordered combination as otherwise disclosed herein to implement the functions and algorithms of this disclosure. The example components of deployment system 200 shown in FIG. 2 are implemented at least partially by hardware at one or more computing devices, such as one or more hardware processors executing stored program instructions stored in one or more memories for performing the functions that are described herein. Or, one or more virtual machine instances in a shared computing facility such as a cloud computing center may be used. The functions described herein are intended to indicate operations that are performed using programming in a special-purpose computer or general-purpose computer, in various embodiments. Deployment system 200 illustrates only one of many possible arrangements of components configured to execute the programming described herein. Other arrangements may include fewer or different components, and the division of work between the components may vary depending on the arrangement.
  • In an embodiment, deployment system 200 includes template engine 210. Template engine 210 is programmed or configured to receive a template and one or more configuration parameter values for a data pipeline system. The template engine 210 can use the template and configuration parameter values to configure, validate, and/or deploy the data pipeline system. Further details regarding the template engine 210 will be discussed herein. Template engine 210 is communicatively coupled to template library 220, repository 250, and graphical user interface (GUI) 230.
  • In an embodiment, deployment system 200 includes template library 220. Template library 220 stores a library of templates 222A through 222N that each store preconfigured settings for commonly deployed data pipeline systems. Further details regarding templates 222 will be discussed herein.
  • In an embodiment, deployment system 200 includes GUI 230. GUI 230 is programmed or configured to receive one or more configuration parameter values from computing device 240 for use in the deployment of a data pipeline system. GUI 230 is further programmed or configured to monitor or view the status of a deployed pipeline system in production environment 270. Further details regarding GUI 230 will be discussed herein.
  • In an embodiment, deployment system 200 includes repository 250. Repository 250 is programmed or configured to receive and commit code received from the template engine 210 for a data pipeline system. Further details regarding repository 250 will be discussed herein. Repository 250 is communicatively coupled to pipeline deployment service 260.
  • In an embodiment, deployment system 200 includes pipeline deployment service 260. Pipeline deployment service 260 is programmed or configured to retrieve the committed source code from repository 250, execute it, and deploy the data pipeline system to a production environment 270. Further details regarding pipeline deployment service 260 will be discussed herein.
  • The present techniques provide improvements in the configuration, validation, and deployment of data pipeline systems. By unifying frequently-used data pipeline systems into templates, the deployment system reduces manual error in the configuration and deployment of data pipeline systems. The present techniques also improve computational efficiency by replacing inefficient, ad hoc implementations of frequently-used data pipeline systems with templatized versions that incorporate best practices and that can be customized as necessary by a user for a particular deployment.
  • 2.1 Templates
  • A template is stored digital data that identifies one or more job definitions for a particular data pipeline system. In an embodiment, a template can be implemented in any markup language or data format syntax, such as extensible markup language (XML), “YAML Ain't Markup Language” (YAML), or JavaScript Object Notation (JSON), and is stored in the form of digital data in a storage device or digital memory.
  • One or more templates 222A through 222N may be stored in template library 220. Each template 222A through 222N may cover a frequently-used data pipeline system. For example, different templates may exist for frequent data processing tasks such as data deduplication, data cleanup, data extraction, machine learning training and classifying, joining disparate data sources together, and other frequent data processing tasks.
  • A job definition is a set of computer-implemented instructions that can be used for creating, executing, and/or implementing a data processing job in a data pipeline system. In an embodiment, the execution of a data processing job may generate one or more output datasets. These output datasets may in turn be used as input datasets in further data processing jobs.
  • In an embodiment, a job definition may include a code identifier that identifies code for processing the data processing job. For example, the code identifier may be a method call, a function call, a pointer, a data object, a library, an executable file, a script, a macro, or some other identification of a set of programming instructions for performing the data processing job.
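  • The mapping from a code identifier to runnable code can be sketched in Python as a simple registry lookup. The registry, decorator, and `run` method below are hypothetical illustrations, not part of the disclosed system; only the `"class"` field mirrors the job definitions shown later in Table 1.

```python
# Hypothetical registry mapping a code identifier's "class" field to the
# code that processes the job.
CODE_REGISTRY = {}

def register(cls):
    """Make a job-processing class addressable by its name."""
    CODE_REGISTRY[cls.__name__] = cls
    return cls

@register
class ReadInputData:
    def run(self, file_path):
        # Stand-in for real input-reading logic.
        return f"rows loaded from {file_path}"

def resolve(code_identifier):
    """Return the class named by a job definition's code identifier."""
    return CODE_REGISTRY[code_identifier["class"]]

job = resolve({"class": "ReadInputData"})()
print(job.run("/data/raw.csv"))  # rows loaded from /data/raw.csv
```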
  • In an embodiment, a job definition may include one or more dataset dependency identifiers that each identify an input dataset for the particular data processing job. The dataset dependency identifiers may be used to determine how the particular data processing job is dependent on one or more additional data processing jobs or the output datasets of one or more additional data processing jobs. In the example of data pipeline system 100, a dataset dependency identifier for the job definition of the featurize input data job 120 may identify the input dataset generated by read input data job 110. This dataset dependency identifier thus establishes that featurize input data job 120 is dependent on read input data job 110. Although the example of data pipeline system 100 shows a serial set of dependencies for the various jobs 110, 120, 130, and 140, in another embodiment, there may be multiple dependencies, non-serial dependencies, or another configuration of dependencies.
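  • The dataset dependency identifiers described above imply an execution order for the jobs. A minimal sketch of deriving that order with a topological sort follows; the job names come from data pipeline system 100, while the dictionary structure is an illustrative assumption.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical job definitions; "dependencies" maps an input label to the
# upstream job whose output dataset it consumes.
job_definitions = {
    "read_input_data": {"dependencies": {}},
    "featurize": {"dependencies": {"input": "read_input_data"}},
    "train_model": {"dependencies": {"training_data": "featurize"}},
    "apply_model": {"dependencies": {"model": "train_model"}},
}

def execution_order(jobs):
    """Order the jobs so that every job runs after the jobs it depends on."""
    graph = {name: set(spec["dependencies"].values())
             for name, spec in jobs.items()}
    return list(TopologicalSorter(graph).static_order())

print(execution_order(job_definitions))
# ['read_input_data', 'featurize', 'train_model', 'apply_model']
```

The same sort applies unchanged to the multiple or non-serial dependency configurations mentioned above, since the graph is not restricted to a chain.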
  • In an embodiment, a job definition may include one or more configuration parameters for processing a job. Configuration parameters are settings necessary for processing the job. In some embodiments, the value of a configuration parameter may be hard coded in the job definition. In some embodiments, the value of a configuration parameter may be received via GUI 230, as will be discussed herein.
  • In an embodiment, a template may include one or more global configuration parameters for deploying the data pipeline system. In some embodiments, the value of a configuration parameter may be hard coded in the template. In some embodiments, the value of a configuration parameter may be received via GUI 230, as will be discussed herein.
  • Table 1 displays an example of an excerpt of a template for performing machine learning.
  • TABLE 1
    {
      "root": "{{ root }}/",
      "job_definitions": {
        "read_input_data": {
          "transaction_type": "TransactionType.SNAPSHOT",
          "path": "{{ root }}/Input_Data",
          "code_identifier": {
            "class": "ReadInputData",
            "parameters": {{ file_path }}
          },
          "dependencies": null,
          "schema": null
        },
        "featurize": {
          "transaction_type": "TransactionType.SNAPSHOT",
          "path": "{{ root }}/Featurize",
          "code_identifier": {
            "class": "Featurize",
            "parameters": {
              "featurized_columns": {{ featurize.featurize_columns }},
              "featurizers": {{ featurize.comparators }}
            }
          },
          "dependencies": {
            "input": "read_input_data"
          },
          "schema": null
        },
        "train_model": {
          "transaction_type": "TransactionType.SNAPSHOT",
          "path": "{{ root }}/Train_Model",
          "code_identifier": {
            "class": "TrainModel",
            "parameters": {
              "number_iterations": {{ train_model.iterations }},
              "classifier_model": {{ train_model.classifier }}
            }
          },
          "dependencies": {
            "training_data": "featurize"
          },
          "schema": null
        },
        "apply_model": {
          "transaction_type": "TransactionType.SNAPSHOT",
          "path": "{{ root }}/Apply_Model",
          "code_identifier": {
            "class": "ApplyModel",
            "parameters": {
              "test_data": {{ test_data_filepath }}
            }
          },
          "dependencies": {
            "model": "train_model"
          },
          "schema": null
        }
      }
    }
  • Table 1 illustrates an example template for data pipeline system 100, according to one embodiment. In this particular example, the template of Table 1 is written in JSON, but another markup language or syntax may be used in other embodiments. The template of Table 1 includes job definitions for four jobs: read_input_data, featurize, train_model, and apply_model, which correspond to jobs 110, 120, 130, and 140, respectively. Each of the job definitions provides configuration parameters for processing jobs 110, 120, 130, and 140.
  • Table 1 includes a “root” tag that identifies the root directory where the data pipeline system will be deployed. The “root” tag is an example of a global configuration parameter to be used across multiple data processing jobs. The double curly brackets in this example indicate that the root parameter is a value that will be provided by a user via GUI 230 rather than hard coded; in other embodiments, different syntax may be used.
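  • Substituting such placeholders can be sketched as a simple text rendering step. The regular expression, function names, and example value below are illustrative assumptions, not the system's actual implementation:

```python
import re

# Hypothetical parameter value, as it might be received via GUI 230.
values = {"root": "/pipelines/ml_demo"}

line = '"path": "{{ root }}/Input_Data",'

def render(template_text, values):
    """Replace each {{ name }} placeholder with its configured value."""
    def substitute(match):
        name = match.group(1)
        if name not in values:
            raise KeyError(f"missing configuration parameter: {name}")
        return str(values[name])
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", substitute, template_text)

print(render(line, values))  # "path": "/pipelines/ml_demo/Input_Data",
```

Raising on a missing parameter mirrors the requirement, discussed below, that all necessary configuration parameter values be received before code is prepared.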
  • Table 1 includes a featurize job definition. The job definition for the featurize job includes a “transaction_type” tag. The transaction_type defines how the output of the job is going to be used. A SNAPSHOT value for the transaction_type indicates that the output of the job will be a newly generated dataset. An APPEND value for the transaction_type indicates that the output of the job will be appended to an existing dataset (not depicted in Table 1).
  • The job definition for the featurize job includes a “path” tag. The path tag defines where the dataset that is going to be generated by the job is going to be output. In the example of the featurize job, the output of the job will be in a path in a subdirectory “/Featurize” under the root directory.
  • The job definition for the featurize job in Table 1 includes a “code_identifier” tag. The code_identifier tag defines the target code that needs to be executed in order to process the job. In this example, the class “Featurize” can be used for processing the job. The code identifier includes a “parameters” tag that identifies one or more configuration parameters for executing the code. In this example, the configuration parameters for the featurize job include a “featurized_columns” configuration parameter and a “featurizers” configuration parameter. The curly brackets for these configuration parameters indicate that the values for these configuration parameters will be provided by a user via GUI 230. In other embodiments, the values of configuration parameters may be hard coded in the job definition itself.
  • The job definition for the featurize job in Table 1 includes a “dependencies” tag. The dependencies tag identifies one or more input datasets for the featurize job. In this example, the featurize job takes as an input the dataset generated by the “read_input_data” job defined earlier in the template. Thus, the featurize job will use as input the output of the read_input_data job. The dependencies information thus describes how the different jobs in a template are interrelated and/or dependent on one another. Although the example illustrated here only identifies a single dependency, in other embodiments, multiple dependencies may exist.
  • The job definition for the featurize job in Table 1 includes a “schema” tag. The schema tag identifies a schema for an output dataset for the job. In this case, no schema is specified for the job. In other embodiments, a schema may identify various characteristics of the output dataset, such as data types, expected values, column names, etc.
  • A template thus allows the deployment system 200 to use out-of-the-box configurations for frequently-used pipelines, while still allowing customization of specific configuration parameters by a user. This results in efficiency and ease of deployment of a pipeline.
  • 2.2 Graphical User Interface (GUI)
  • GUI 230 may be accessible by one or more computing devices 240 and allows users to interact with the configuration, validation, and deployment of a data pipeline system. In an embodiment, template engine 210 can use a template 222 to generate a GUI 230. Template engine 210 can then receive configuration parameter values received via GUI 230.
  • FIG. 3 illustrates an example user interface 300 of GUI 230 for initial configuration of a data pipeline. In an embodiment, user interface 300 may include one or more user inputs 310 and 320 for receiving configuration values for global configuration parameters. For example, user input 310 is used to receive a value for a configuration parameter that defines the pipeline root directory. User input 320 is used to receive a value for a configuration parameter that identifies which pipeline template or pipeline templates should be used during configuration. In an embodiment, the user input 320 may include a dropdown or selection of available templates 222. Template engine 210 may retrieve the list of available templates 222 from template library 220 and display the list of available templates 222 via user input 320 to allow a user to easily select a template from available templates. User interface 300 includes a user input 330 to proceed to the next step in the pipeline configuration.
  • FIG. 4 illustrates an example user interface 400 of GUI 230 for configuration of a particular data processing job. User interface 400 may be displayed after the user selects user input 330 in FIG. 3. The example of user interface 400 corresponds to the job definition for the read_input_data job described above with reference to Table 1. User input 410 allows a user to select which job in the data pipeline system they want to view. In this particular case, the user input 410 is set to view the read_input_data job. In an embodiment, the values of the user input 410 may include a list of one or more of the job definitions in the template that was selected with reference to user interface 300.
  • User interface 400 may include a display 420 that shows the input datasets for the particular data processing job. The input datasets are the dataset dependencies that are defined in the template for the particular data processing job. In this particular example, since read_input_data does not have any dataset dependencies, the value of display 420 is empty.
  • User interface 400 may include a help display 430 that provides a user with information regarding the data processing job. In an embodiment, the information displayed in help display 430 may be defined in the job definition of the template.
  • User interface 400 may include a set of configuration parameters 440. For example, configuration parameters 440 includes a user input 450 for receiving a value for the file_path parameter. In an embodiment, the contents of configuration parameters 440 may be displayed based on the job definition in the template. For example, Table 1 shows that the file_path parameter is a parameter for the read_input_data job, and the value needs to be provided by a user.
  • User interface 400 may include a user input 460 to perform validation of the currently selected data processing job. For example, by selecting user input 460, a user could attempt to validate the configuration parameters and other settings of the read_input_data job. Further details regarding job validation will be described herein.
  • User interface 400 may include a user input 470 to initiate the deployment of the data pipeline system by template engine 210. Further details regarding the deployment of the data pipeline system by template engine 210 will be described herein.
  • FIG. 5 illustrates an example user interface 500 of GUI 230 for configuration of a particular data processing job. The example of user interface 500 corresponds to the job definition for the featurize job described above with reference to Table 1. User input 510 allows a user to select which job in the data pipeline system they want to view. In this particular case, the user input 510 is set to view the featurize job. For example, the user may have navigated to user interface 500 by selecting the “featurize” option from user input 410 in the prior user interface 400.
  • User interface 500 may include a display 520 that shows the input datasets for the particular data processing job. In this particular example, since the featurize job has read_input_data as a dataset dependency, display 520 shows that read_input_data is an input dataset. This allows a user to quickly see the interrelated dependencies amongst datasets and/or jobs in the data pipeline system.
  • User interface 500 may include a help display 530 that provides a user with information regarding the data processing job. In an embodiment, the information displayed in help display 530 may be defined in the job definition of the template.
  • User interface 500 may include a set of configuration parameters 540. For example, configuration parameters 540 includes a user input 550 for receiving a value for the featurized_columns parameter. Configuration parameters 540 includes a user input 552 for receiving a value for the featurizers parameter. In an embodiment, the contents of configuration parameters 540 may be displayed based on the job definition in the template. For example, Table 1 shows that featurized_columns and featurizers are parameters provided by a user for the featurize job.
  • User interface 500 may include a user input 560 to perform validation of the currently selected data processing job. For example, by selecting user input 560, a user could attempt to validate the configuration parameters and other settings of the featurize job. Further details regarding job validation will be described herein.
  • User interface 500 may include a user input 570 to initiate the deployment of the data pipeline system by template engine 210. Further details regarding the deployment of the data pipeline system by template engine 210 will be described herein.
  • In an embodiment, GUI 230 may also be communicatively coupled to production environment 270 and may be used to view the status and health of the deployed pipeline system in production environment 270. Further details regarding viewing the status and health of a pipeline may be found in U.S. Pat. No. 9,678,850 (“Data Pipeline Monitoring”), which is incorporated by reference as if fully set forth herein.
  • 2.3 Deployment of Data Pipeline System
  • In an embodiment, template engine 210 is programmed or configured to use a template 222 for a data pipeline system in combination with one or more configuration parameter values received via GUI 230 to configure, validate, and generate a set of code for deployment of the data pipeline system.
  • Upon receiving a template 222 from template library 220, template engine 210 may cause to be displayed in GUI 230 one or more user interfaces for receiving configuration parameter values, such as the user interfaces displayed earlier with respect to FIGS. 3, 4, and 5. Once template engine 210 has received all the necessary configuration parameter values for a template 222, template engine 210 can send the appropriate code to repository 250 for storage.
  • For each job definition in the template 222, template engine 210 can retrieve or execute the relevant computer code identified by the code identifier in the job definition. Template engine 210 can then use the retrieved computer code, the dataset dependencies, the configuration parameters, and the configuration parameter values received from GUI 230 to prepare a set of jobs defined by the template for inclusion in a data pipeline system. In an embodiment, preparation of the set of jobs may include copying the relevant code, parameters, values, templates, and datasets and storing them in a repository 250.
  • Repository 250 is programmed or configured to serve as an archive for storing, managing, and accessing computer code and other digital data. For example, in one embodiment, repository 250 may be programmed or configured to allow for the checking in, checking out, committing, merging, branching, forking, or other management of computer code and other digital data. Computer code can be in any programming language, including, but not limited to Java, Structured Query Language (SQL), Python, Scala, etc. Repository 250 may be programmed or configured to provide version control for source code files. In one embodiment, repository 250 may be accessible via a web interface and/or a command line interface. In one embodiment, repository 250 may be implemented as a GIT repository.
  • Pipeline deployment service 260 is programmed or configured to retrieve computer code from repository 250 and deploy it to a production environment 270. In an embodiment, pipeline deployment service 260 is programmed or configured to compile and build the computer code, using the template, into a set of executable code, such as a JAR file, SQL file, executable file (.EXE), library, plugin, or any other form of executable code. In one embodiment, the computer code may be compiled and built into multiple sets of executable code. For example, each job definition in the template may correspond to one or more sets of executable code. In an embodiment, pipeline deployment service 260 may be implemented as a Gradle build system. The end result may be the deployment of the data pipeline system specified in the template into production environment 270.
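  • Under the assumption of a GIT repository and a Gradle build system, both named above, the retrieve-and-build step might reduce to commands such as the following. The sketch only assembles the commands a deployment service could run; the URL and paths are illustrative, not part of the disclosed system.

```python
def build_commands(repo_url, workdir):
    """Commands a pipeline deployment service might run (illustrative)."""
    return [
        ["git", "clone", repo_url, workdir],  # retrieve committed code
        ["gradle", "build"],                  # compile and build into executable artifacts
    ]

for cmd in build_commands("https://repo.example/pipeline.git", "/tmp/pipeline"):
    print(" ".join(cmd))
```

Each command could then be handed to a process runner such as `subprocess.run(cmd, check=True)`, with the built artifacts pushed to production environment 270.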
  • 2.4 Validation
  • In an embodiment, deployment system 200 may also perform validation of data processing job configurations during setup. For example, if a user selects user inputs 460 or 560, the template engine 210 can execute validation of the data processing job based on the provided configuration parameter values.
  • FIG. 6 illustrates an example of a validation screen for the featurize job after a user selects user input 560. Selection of user input 560 causes template engine 210 to use the provided configuration parameter values as well as the selected template, and attempt to perform validation of the data processing job based on those settings. In an embodiment, the selected source code in the job definition of the template may include one or more functions, methods, or other sequences of instructions for validation of the data processing job. The sequences of instructions for validation can be used to generate validation results 610. For example, in one embodiment, the sequence of instructions for validation may cause the data processing job, as well as any dependent datasets or jobs, to be compiled, built, and executed, and may apply specific validation criteria with the given parameter values. In an embodiment, validation criteria may be stored and/or defined in one or more of templates 222A through 222N. In another embodiment, validation criteria may be stored and/or defined in template engine 210. In another embodiment, validation criteria may be stored and/or defined in another data store coupled to template engine 210 (not depicted). In an embodiment, validation may be performed in a development environment separate from production environment 270 so that validation does not compromise production performance.
  • Validation results 610 shows the results of application of various validation criteria to the data processing job with the given configuration parameters and template. In the example of 610, there are four validation criteria: featurization_exception, date_entropy, region_count_NA, and region_count_EU. Each validation criterion may refer to a specific sequence of instructions to apply to the particular data processing job. The value of the validation criterion may indicate the return result of that sequence of instructions. For example, in the case of featurization_exception, the validation logic found one exception when attempting to featurize the data provided by the configuration parameters. In an embodiment, the lower bound and upper bound may specify limits of acceptable values for the various validation criteria. In an embodiment, the “Is Valid?” field can show whether or not the value observed for a validation criterion is within the expected bounds.
  • Validation results 610 thus allows a user to quickly and easily troubleshoot the configuration of a data processing job early during configuration instead of waiting to deploy the data processing job in a production environment.
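  • The bounds check described above can be sketched as follows. The criterion names mirror validation results 610, but the observed values and bounds are invented for illustration:

```python
# Hypothetical observed values and acceptable bounds for each criterion.
criteria = {
    "featurization_exception": {"value": 1, "lower": 0, "upper": 0},
    "date_entropy": {"value": 0.82, "lower": 0.5, "upper": 1.0},
    "region_count_NA": {"value": 1200, "lower": 1, "upper": 100000},
    "region_count_EU": {"value": 0, "lower": 1, "upper": 100000},
}

def validate(criteria):
    """Mark each criterion valid when its observed value is within bounds."""
    return {name: c["lower"] <= c["value"] <= c["upper"]
            for name, c in criteria.items()}

for name, ok in validate(criteria).items():
    print(f"{name}: {'valid' if ok else 'INVALID'}")
```

Here featurization_exception and region_count_EU would be flagged invalid, letting the user correct the configuration before any production deployment.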
  • User input 630 allows a user to navigate back to the user interface 500 for the featurize job.
  • 3.0 Example Process and Algorithm
  • FIG. 7 illustrates a process 700 of configuring a data pipeline system for deployment. For purposes of illustrating a clear example, FIG. 7 is described with reference to deployment system 200, but other embodiments may implement or execute the process 700 using other computer systems. FIG. 7, and each other flow diagram in this disclosure, is intended to illustrate an algorithm that can be used as the basis of programming an implementation of one or more of the claims that are set forth herein, using digital computers and a programming language or development environment, and is illustrated and described at the level at which skilled persons, in the field to which this disclosure is directed, are accustomed to communicating with one another to identify or describe programs, methods, objects and the like that can provide a working system.
  • In step 710, template engine 210 is programmed or configured to retrieve a list of available templates 222A through 222N stored in template library 220. Each template 222 may provide configuration parameters for a plurality of data processing jobs in a frequently-used data pipeline system. The process 700 may then proceed to step 720.
  • In step 720, template engine 210 is programmed or configured to display the list of available templates 222A through 222N in GUI 230. GUI 230 thus allows a user accessing a computing device 240 to easily view which data pipeline systems are frequently used, making it easier and more efficient to deploy a frequently used data pipeline system. The process 700 may then proceed to step 730.
  • In step 730, template engine 210 receives a user input selecting a template for configuration and deployment via GUI 230. The user input may be provided by computing device 240. Template engine 210 retrieves the selected template 222 from template library 220. Template 222 includes a plurality of job definitions for a particular data pipeline system. The process 700 may then proceed to step 740.
  • In step 740, template engine 210 uses template 222 to display on GUI 230 one or more user interfaces for receiving configuration parameter values for the data pipeline system and its jobs. Examples of such user interfaces include user interfaces for receiving global configuration parameter values, as in user interface 300, and for receiving job-specific configuration parameter values, as in user interfaces 400 and 500. Template engine 210 uses the various values specified in the template 222 to determine what information to display in the user interfaces, including what fields require user input. The process 700 may then proceed to step 750.
  • In step 750, template engine 210 receives a plurality of configuration parameter values for the data pipeline system and jobs via GUI 230. The configuration parameter values may be provided via user inputs. In an embodiment, configuration parameter values may be received for global configuration parameters and/or for one or more of each job definition in the template. The configuration parameter values may be stored. The process 700 may then proceed to step 760.
  • In step 760, once a user has provided a user input to deploy the pipeline, template engine 210 uses the template 222 and the various configuration parameter values stored in step 750 to prepare code for the deployment of the data pipeline system. For example, the template 222 will include one or more code identifiers for every job, each of which specifies a set of computing instructions, such as a method, function, script, or other sequence that can be used for the particular job. Template engine 210 will retrieve the appropriate code from a source code repository (not depicted), using the template and the configuration parameter values, and prepare a set of code for each of the jobs in the data pipeline system. The process 700 may then proceed to step 770.
  • In step 770, template engine 210 will store the necessary template, configuration parameter values, prepared code, dataset dependencies, and other necessary digital data in repository 250. The process 700 may then proceed to step 780.
  • In step 780, pipeline deployment service 260 will retrieve the data stored in repository 250 in step 770. Pipeline deployment service 260 will then compile, build, and deploy the code to production environment 270.
  • 4.0 Implementation Mechanisms—Hardware Overview
  • Referring now to FIG. 8, it is a block diagram that illustrates a computing device 800 in which the example embodiment(s) of the present invention may be embodied. Computing device 800 and its components, including their connections, relationships, and functions, are meant to be exemplary only, and not meant to limit implementations of the example embodiment(s). Other computing devices suitable for implementing the example embodiment(s) may have different components, including components with different connections, relationships, and functions.
  • Computing device 800 may include a bus 802 or other communication mechanism for addressing main memory 806 and for transferring data between and among the various components of device 800.
  • Computing device 800 may also include one or more hardware processors 804 coupled with bus 802 for processing information. A hardware processor 804 may be a general purpose microprocessor, a system on a chip (SoC), or other processor.
  • Main memory 806, such as a random access memory (RAM) or other dynamic storage device, also may be coupled to bus 802 for storing information and software instructions to be executed by processor(s) 804. Main memory 806 also may be used for storing temporary variables or other intermediate information during execution of software instructions to be executed by processor(s) 804.
  • Software instructions, when stored in storage media accessible to processor(s) 804, render computing device 800 into a special-purpose computing device that is customized to perform the operations specified in the software instructions. The terms “software”, “software instructions”, “computer program”, “computer-executable instructions”, and “processor-executable instructions” are to be broadly construed to cover any machine-readable information, whether or not human-readable, for instructing a computing device to perform specific operations, and including, but not limited to, application software, desktop applications, scripts, binaries, operating systems, device drivers, boot loaders, shells, utilities, system software, JAVASCRIPT, web pages, web applications, plugins, embedded software, microcode, compilers, debuggers, interpreters, virtual machines, linkers, and text editors.
  • Computing device 800 also may include read only memory (ROM) 808 or other static storage device coupled to bus 802 for storing static information and software instructions for processor(s) 804.
  • One or more mass storage devices 810 may be coupled to bus 802 for persistently storing information and software instructions on fixed or removable media, such as magnetic, optical, solid-state, magnetic-optical, flash memory, or any other available mass storage technology. The mass storage may be shared on a network, or it may be dedicated mass storage. Typically, at least one of the mass storage devices 810 (e.g., the main hard disk for the device) stores a body of program and data for directing operation of the computing device, including an operating system, user application programs, driver and other support files, as well as other data files of all sorts.
  • Computing device 800 may be coupled via bus 802 to display 812, such as a liquid crystal display (LCD) or other electronic visual display, for displaying information to a computer user. In some configurations, a touch sensitive surface incorporating touch detection technology (e.g., resistive, capacitive, etc.) may be overlaid on display 812 to form a touch sensitive display for communicating touch gesture (e.g., finger or stylus) input to processor(s) 804.
  • An input device 814, including alphanumeric and other keys, may be coupled to bus 802 for communicating information and command selections to processor 804. In addition to or instead of alphanumeric and other keys, input device 814 may include one or more physical buttons or switches such as, for example, a power (on/off) button, a “home” button, volume control buttons, or the like.
  • Another type of user input device may be a cursor control 816, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 804 and for controlling cursor movement on display 812. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • While in some configurations, such as the configuration depicted in FIG. 8, one or more of display 812, input device 814, and cursor control 816 are external components (i.e., peripheral devices) of computing device 800, some or all of display 812, input device 814, and cursor control 816 are integrated as part of the form factor of computing device 800 in other configurations.
  • Functions of the disclosed systems, methods, and modules may be performed by computing device 800 in response to processor(s) 804 executing one or more programs of software instructions contained in main memory 806. Such software instructions may be read into main memory 806 from another storage medium, such as storage device(s) 810. Execution of the software instructions contained in main memory 806 causes processor(s) 804 to perform the functions of the example embodiment(s).
  • While functions and operations of the example embodiment(s) may be implemented entirely with software instructions, hard-wired or programmable circuitry of computing device 800 (e.g., an ASIC, an FPGA, or the like) may be used in other embodiments in place of or in combination with software instructions to perform the functions, according to the requirements of the particular implementation at hand.
  • The term “storage media” as used herein refers to any non-transitory media that store data and/or software instructions that cause a computing device to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, non-volatile random access memory (NVRAM), flash memory, optical disks, magnetic disks, or solid-state drives, such as storage device 810. Volatile media includes dynamic memory, such as main memory 806. Common forms of storage media include, for example, a floppy disk, a flexible disk, a hard disk, a solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, flash memory, or any other memory chip or cartridge.
  • Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 802. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • Various forms of media may be involved in carrying one or more sequences of one or more software instructions to processor(s) 804 for execution. For example, the software instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the software instructions into its dynamic memory and send the software instructions over a telephone line using a modem. A modem local to computing device 800 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 802. Bus 802 carries the data to main memory 806, from which processor(s) 804 retrieves and executes the software instructions. The software instructions received by main memory 806 may optionally be stored on storage device(s) 810 either before or after execution by processor(s) 804.
  • Computing device 800 also may include one or more communication interface(s) 818 coupled to bus 802. A communication interface 818 provides a two-way data communication coupling to a wired or wireless network link 820 that is connected to a local network 822 (e.g., Ethernet network, Wireless Local Area Network, cellular phone network, Bluetooth wireless network, or the like). Communication interface 818 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. For example, communication interface 818 may be a wired network interface card, a wireless network interface card with an integrated radio antenna, or a modem (e.g., ISDN, DSL, or cable modem).
  • Network link(s) 820 typically provide data communication through one or more networks to other data devices. For example, a network link 820 may provide a connection through a local network 822 to a host computer 824 or to data equipment operated by an Internet Service Provider (ISP) 826. ISP 826 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 828. Local network(s) 822 and Internet 828 use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link(s) 820 and through communication interface(s) 818, which carry the digital data to and from computing device 800, are example forms of transmission media.
  • Computing device 800 can send messages and receive data, including program code, through the network(s), network link(s) 820 and communication interface(s) 818. In the Internet example, a server 830 might transmit a requested code for an application program through Internet 828, ISP 826, local network(s) 822 and communication interface(s) 818.
  • The received code may be executed by processor 804 as it is received, and/or stored in storage device 810, or other non-volatile storage for later execution.
  • 5.0 Implementation Mechanisms—Software Overview
  • FIG. 9 is a block diagram of a software system 900 that may be employed for controlling the operation of computing device 800. Software system 900 and its components, including their connections, relationships, and functions, are meant to be exemplary only, and are not meant to limit implementations of the example embodiment(s). Other software systems suitable for implementing the example embodiment(s) may have different components, including components with different connections, relationships, and functions.
  • Software system 900 is provided for directing the operation of computing device 800. Software system 900, which may be stored in system memory (RAM) 806 and on fixed storage (e.g., hard disk or flash memory) 810, includes a kernel or operating system (OS) 910.
  • The OS 910 manages low-level aspects of computer operation, including managing execution of processes, memory allocation, file input and output (I/O), and device I/O. One or more application programs, represented as 902A, 902B, 902C . . . 902N, may be “loaded” (e.g., transferred from fixed storage 810 into memory 806) for execution by the system 900. The applications or other software intended for use on device 800 may also be stored as a set of downloadable computer-executable instructions, for example, for downloading and installation from an Internet location (e.g., a Web server, an app store, or other online service).
  • Software system 900 includes a graphical user interface (GUI) 915, for receiving user commands and data in a graphical (e.g., “point-and-click” or “touch gesture”) fashion. These inputs, in turn, may be acted upon by the system 900 in accordance with instructions from operating system 910 and/or application(s) 902. The GUI 915 also serves to display the results of operation from the OS 910 and application(s) 902, whereupon the user may supply additional inputs or terminate the session (e.g., log off).
  • OS 910 can execute directly on the bare hardware 920 (e.g., processor(s) 804) of device 800. Alternatively, a hypervisor or virtual machine monitor (VMM) 930 may be interposed between the bare hardware 920 and the OS 910. In this configuration, VMM 930 acts as a software “cushion” or virtualization layer between the OS 910 and the bare hardware 920 of the device 800.
  • VMM 930 instantiates and runs one or more virtual machine instances (“guest machines”). Each guest machine comprises a “guest” operating system, such as OS 910, and one or more applications, such as application(s) 902, designed to execute on the guest operating system. The VMM 930 presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems.
  • In some instances, the VMM 930 may allow a guest operating system to run as if it is running on the bare hardware 920 of device 800 directly. In these instances, the same version of the guest operating system configured to execute on the bare hardware 920 directly may also execute on VMM 930 without modification or reconfiguration. In other words, VMM 930 may provide full hardware and CPU virtualization to a guest operating system in some instances.
  • In other instances, a guest operating system may be specially designed or configured to execute on VMM 930 for efficiency. In these instances, the guest operating system is “aware” that it executes on a virtual machine monitor. In other words, VMM 930 may provide para-virtualization to a guest operating system in some instances.
  • The above-described computer hardware and software is presented for purposes of illustrating the underlying computer components that may be employed for implementing the example embodiment(s). The example embodiment(s), however, are not necessarily limited to any particular computing environment or computing device configuration. Instead, the example embodiment(s) may be implemented in any type of system architecture or processing environment that one skilled in the art, in light of this disclosure, would understand as capable of supporting the features and functions of the example embodiment(s) presented herein.
  • 6.0 Other Aspects of Disclosure
  • Although some of the figures described in the foregoing specification include flow diagrams with steps that are shown in an order, the steps may be performed in any order, and are not limited to the order shown in those flowcharts. Additionally, some steps may be optional, may be performed multiple times, and/or may be performed by different components. All steps, operations and functions of a flow diagram that are described herein are intended to indicate operations that are performed using programming in a special-purpose computer or general-purpose computer, in various embodiments. In other words, each flow diagram in this disclosure, in combination with the related text herein, is a guide, plan or specification of all or part of an algorithm for programming a computer to execute the functions that are described. The level of skill in the field associated with this disclosure is known to be high, and therefore the flow diagrams and related text in this disclosure have been prepared to convey information at a level of sufficiency and detail that is normally expected in the field when skilled persons communicate among themselves with respect to programs, algorithms and their implementation.
  • In the foregoing specification, the example embodiment(s) of the present invention have been described with reference to numerous specific details. However, the details may vary from implementation to implementation according to the requirements of the particular implementation at hand. The example embodiment(s) are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (20)

1. A method, comprising:
receiving a template that defines a plurality of job definitions;
wherein each particular job definition of the plurality of job definitions corresponds to a particular data processing job, and wherein each particular job definition comprises:
a code identifier that identifies code for processing the particular data processing job;
a plurality of dataset dependency identifiers that identify a plurality of input datasets for the particular data processing job;
a plurality of configuration parameters for processing the particular data processing job;
for each particular job definition of the plurality of job definitions:
based on the template, causing to be displayed a user interface for receiving a plurality of configuration parameter values for the plurality of configuration parameters for the particular job definition;
receiving a global configuration parameter value that applies to each job definition of the plurality of job definitions;
executing the corresponding particular data processing job for the particular job definition by executing the code for processing the particular data processing job, by using the input datasets for the particular data processing job and the global configuration parameter value;
wherein the method is performed using one or more processors.
2. The method of claim 1, wherein the method further comprises:
receiving a list of a plurality of available templates;
causing to be displayed in the user interface the plurality of available templates;
receiving a user input for selection of the template from the plurality of available templates.
3. The method of claim 1, wherein the global configuration parameter value is received through a graphical user interface.
4. The method of claim 1, wherein the method further comprises:
in response to a command to perform a validation of a target data processing job that corresponds to a target job definition of the plurality of job definitions,
executing the target data processing job for the target job definition by executing the code for processing the target data processing job, by using the input datasets for the target data processing job and the plurality of configuration parameter values for the target data processing job;
applying one or more validation criteria to the target data processing job to generate a validation value.
5. The method of claim 4, wherein the method further comprises:
comparing the validation value to a pre-stored lower bound value and upper bound value;
based on the comparison, displaying whether the validation value is within the lower bound value and upper bound value.
6. The method of claim 1, wherein the plurality of job definitions define a data pipeline system for deduplication of data records.
7. The method of claim 1, wherein the plurality of job definitions define a data pipeline system for featurizing, training, and applying a machine learning model.
8. The method of claim 1, wherein the plurality of job definitions define a data pipeline system for data cleanup of data records.
9. The method of claim 1, wherein the plurality of job definitions define a data pipeline system for joining of data sources.
10. The method of claim 1, wherein the method further comprises:
prior to executing the corresponding particular data processing job, storing the code for processing the particular data processing job, the input datasets for the particular data processing job, and the plurality of configuration parameter values in a code repository with version control.
11. One or more non-transitory computer-readable media storing instructions, wherein the instructions, when executed by one or more hardware processors, cause:
receiving a template that defines a plurality of job definitions;
wherein each particular job definition of the plurality of job definitions corresponds to a particular data processing job, and wherein each particular job definition comprises:
a code identifier that identifies code for processing the particular data processing job;
a plurality of dataset dependency identifiers that identify a plurality of input datasets for the particular data processing job;
a plurality of configuration parameters for processing the particular data processing job;
for each particular job definition of the plurality of job definitions:
based on the template, causing to be displayed a user interface for receiving a plurality of configuration parameter values for the plurality of configuration parameters for the particular job definition;
receiving a global configuration parameter value that applies to each job definition of the plurality of job definitions;
executing the corresponding particular data processing job for the particular job definition by executing the code for processing the particular data processing job, by using the input datasets for the particular data processing job and the global configuration parameter value;
wherein the instructions are executed using the one or more hardware processors.
12. The one or more non-transitory computer-readable media of claim 11, wherein the instructions further comprise instructions for:
receiving a list of a plurality of available templates;
causing to be displayed in the user interface the plurality of available templates;
receiving a user input for selection of the template from the plurality of available templates.
13. The one or more non-transitory computer-readable media of claim 11, wherein the global configuration parameter value is received through a graphical user interface.
14. The one or more non-transitory computer-readable media of claim 11, wherein the instructions further comprise instructions for:
in response to a command to perform a validation of a target data processing job that corresponds to a target job definition of the plurality of job definitions,
executing the target data processing job for the target job definition by executing the code for processing the target data processing job, by using the input datasets for the target data processing job and the plurality of configuration parameter values for the target data processing job;
applying one or more validation criteria to the target data processing job to generate a validation value.
15. The one or more non-transitory computer-readable media of claim 14, wherein the instructions further comprise instructions for:
comparing the validation value to a pre-stored lower bound value and upper bound value;
based on the comparison, displaying whether the validation value is within the lower bound value and upper bound value.
16. The one or more non-transitory computer-readable media of claim 11, wherein the plurality of job definitions define a data pipeline system for deduplication of data records.
17. The one or more non-transitory computer-readable media of claim 11, wherein the plurality of job definitions define a data pipeline system for featurizing, training, and applying a machine learning model.
18. The one or more non-transitory computer-readable media of claim 11, wherein the plurality of job definitions define a data pipeline system for data cleanup of data records.
19. The one or more non-transitory computer-readable media of claim 11, wherein the plurality of job definitions define a data pipeline system for joining of data sources.
20. The one or more non-transitory computer-readable media of claim 11, wherein the instructions further comprise instructions for:
prior to executing the corresponding particular data processing job, storing the code for processing the particular data processing job, the input datasets for the particular data processing job, and the plurality of configuration parameter values in a code repository with version control.
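For orientation only (this sketch is not part of the claims or the specification), the claimed arrangement can be approximated in Python: a template is a list of job definitions, each carrying a code identifier, dataset dependency identifiers, and per-job configuration parameters; a global configuration parameter value applies to every job; and a validation value is compared against pre-stored bounds. Every identifier below (`JobDefinition`, `CODE_REGISTRY`, `run_pipeline`, `within_bounds`, `count_rows`) is a hypothetical name chosen for illustration, not from the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class JobDefinition:
    """One job definition from a template (claim 1)."""
    code_id: str                      # identifies the code that processes the job
    input_datasets: List[str]         # dataset dependency identifiers
    config_params: Dict[str, object]  # per-job configuration parameters

def count_rows(inputs: Dict[str, list], params: Dict[str, object]) -> dict:
    # Toy "processing code": counts rows across all input datasets.
    return {"rows": sum(len(d) for d in inputs.values()), "params": params}

# Maps code identifiers to executable processing code.
CODE_REGISTRY: Dict[str, Callable] = {"count_rows": count_rows}

def run_pipeline(template: List[JobDefinition],
                 datasets: Dict[str, list],
                 global_params: Dict[str, object]) -> dict:
    """Execute each job in the template, merging per-job and global parameters."""
    results = {}
    for job in template:
        # A global configuration parameter value applies to each job definition.
        params = {**job.config_params, **global_params}
        inputs = {name: datasets[name] for name in job.input_datasets}
        results[job.code_id] = CODE_REGISTRY[job.code_id](inputs, params)
    return results

def within_bounds(value: float, lower: float, upper: float) -> bool:
    """Claims 4-5: compare a validation value to pre-stored lower/upper bounds."""
    return lower <= value <= upper
```

A job's result (here, a row count) can serve as the validation value, with `within_bounds` standing in for the claimed comparison against pre-stored bound values.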
Application Number: US16/706,094 (US20200110590A1, Abandoned). Priority Date: 2017-06-30. Filing Date: 2019-12-06. Title: Techniques for configuring and validating a data pipeline deployment.

Priority Applications (1)

Application Number: US16/706,094 (US20200110590A1). Priority Date: 2017-06-30. Filing Date: 2019-12-06. Title: Techniques for configuring and validating a data pipeline deployment.

Applications Claiming Priority (3)

Application Number: US201762527988P. Priority Date: 2017-06-30. Filing Date: 2017-06-30.
Application Number: US15/977,666 (US10534595B1). Priority Date: 2017-06-30. Filing Date: 2018-05-11. Title: Techniques for configuring and validating a data pipeline deployment.
Application Number: US16/706,094 (US20200110590A1). Priority Date: 2017-06-30. Filing Date: 2019-12-06. Title: Techniques for configuring and validating a data pipeline deployment.

Related Parent Applications (1)

Application Number: US15/977,666 (Continuation; US10534595B1). Priority Date: 2017-06-30. Filing Date: 2018-05-11. Title: Techniques for configuring and validating a data pipeline deployment.

Publications (1)

Publication Number: US20200110590A1. Publication Date: 2020-04-09.

Family ID: 69141117

Family Applications (2)

Application Number: US15/977,666 (US10534595B1, Active). Priority Date: 2017-06-30. Filing Date: 2018-05-11. Title: Techniques for configuring and validating a data pipeline deployment.
Application Number: US16/706,094 (US20200110590A1, Abandoned). Priority Date: 2017-06-30. Filing Date: 2019-12-06. Title: Techniques for configuring and validating a data pipeline deployment.


Country Status (1)

Country Link
US (2) US10534595B1 (en)

US20070005582A1 (en) 2005-06-17 2007-01-04 Honeywell International Inc. Building of database queries from graphical operations
US20100199167A1 (en) 2005-06-24 2010-08-05 Justsystems Corporation Document processing apparatus
US20070178501A1 (en) 2005-12-06 2007-08-02 Matthew Rabinowitz System and method for integrating and validating genotypic, phenotypic and medical information into a database according to a standardized ontology
CN1913441A (en) 2005-08-09 2007-02-14 张永敏 Continuous changed data set transmission and updating method
US20070094248A1 (en) 2005-09-26 2007-04-26 Bea Systems, Inc. System and method for managing content by workflows
US7870512B2 (en) 2005-12-28 2011-01-11 Sap Ag User interface (UI) prototype using UI taxonomy
US7801912B2 (en) 2005-12-29 2010-09-21 Amazon Technologies, Inc. Method and apparatus for a searchable data service
US7831917B1 (en) 2005-12-30 2010-11-09 Google Inc. Method, system, and graphical user interface for identifying and communicating with meeting spots
US20070192281A1 (en) 2006-02-02 2007-08-16 International Business Machines Corporation Methods and apparatus for displaying real-time search trends in graphical search specification and result interfaces
US7853573B2 (en) 2006-05-03 2010-12-14 Oracle International Corporation Efficient replication of XML data in a relational database management system
US20070260582A1 (en) 2006-05-05 2007-11-08 Inetsoft Technology Method and System for Visual Query Construction and Representation
US7853614B2 (en) 2006-11-27 2010-12-14 Rapleaf, Inc. Hierarchical, traceable, and association reputation assessment of email domains
US7680939B2 (en) 2006-12-20 2010-03-16 Yahoo! Inc. Graphical user interface to manipulate syndication data feeds
US8799871B2 (en) 2007-01-08 2014-08-05 The Mathworks, Inc. Computation of elementwise expression in parallel
US8171418B2 (en) 2007-01-31 2012-05-01 Salesforce.Com, Inc. Method and system for presenting a visual representation of the portion of the sets of data that a query is expected to return
CN101246486B (en) 2007-02-13 2012-02-01 国际商业机器公司 Method and apparatus for improved process of expressions
US7689624B2 (en) 2007-03-01 2010-03-30 Microsoft Corporation Graph-based search leveraging sentiment analysis of user comments
US8386996B2 (en) 2007-06-29 2013-02-26 Sap Ag Process extension wizard for coherent multi-dimensional business process models
US20090006150A1 (en) 2007-06-29 2009-01-01 Sap Ag Coherent multi-dimensional business process model
US7761525B2 (en) 2007-08-23 2010-07-20 International Business Machines Corporation System and method for providing improved time references in documents
US20090083275A1 (en) 2007-09-24 2009-03-26 Nokia Corporation Method, Apparatus and Computer Program Product for Performing a Visual Search Using Grid-Based Feature Organization
US8417715B1 (en) 2007-12-19 2013-04-09 Tilmann Bruckhaus Platform independent plug-in methods and systems for data mining and analytics
US20090161147A1 (en) 2007-12-20 2009-06-25 Sharp Laboratories Of America, Inc. Personal document container
US20090172674A1 (en) 2007-12-28 2009-07-02 International Business Machines Corporation Managing the computer collection of information in an information technology environment
US7877367B2 (en) 2008-01-22 2011-01-25 International Business Machines Corporation Computer method and apparatus for graphical inquiry specification with progressive summary
US20090193012A1 (en) 2008-01-29 2009-07-30 James Charles Williams Inheritance in a Search Index
US20090199047A1 (en) 2008-01-31 2009-08-06 Yahoo! Inc. Executing software performance test jobs in a clustered system
US9274923B2 (en) 2008-03-25 2016-03-01 Wind River Systems, Inc. System and method for stack crawl testing and caching
US20090282068A1 (en) 2008-05-12 2009-11-12 Shockro John J Semantic packager
US8499287B2 (en) 2008-06-23 2013-07-30 Microsoft Corporation Analysis of thread synchronization events
US7908521B2 (en) 2008-06-25 2011-03-15 Microsoft Corporation Process reflection
AU2009201514A1 (en) 2008-07-11 2010-01-28 Icyte Pty Ltd Annotation system and method
US10747952B2 (en) 2008-09-15 2020-08-18 Palantir Technologies, Inc. Automatic creation and server push of multiple distinct drafts
KR101495132B1 (en) 2008-09-24 2015-02-25 삼성전자주식회사 Mobile terminal and method for displaying data thereof
CN101685449B (en) 2008-09-26 2012-07-11 国际商业机器公司 Method and system for connecting tables in a plurality of heterogeneous distributed databases
US9032254B2 (en) 2008-10-29 2015-05-12 Aternity Information Systems Ltd. Real time monitoring of computer for determining speed and energy consumption of various processes
US8103962B2 (en) 2008-11-04 2012-01-24 Brigham Young University Form-based ontology creation and information harvesting
US8805861B2 (en) 2008-12-09 2014-08-12 Google Inc. Methods and systems to train models to extract and integrate information from data sources
US8312038B2 (en) 2008-12-18 2012-11-13 Oracle International Corporation Criteria builder for query builder
US20100169376A1 (en) 2008-12-29 2010-07-01 Yahoo! Inc. Visual search engine for personal dating
US8073857B2 (en) 2009-02-17 2011-12-06 International Business Machines Corporation Semantics-based data transformation over a wire in mashups
US9268761B2 (en) 2009-06-05 2016-02-23 Microsoft Technology Licensing, Llc In-line dynamic text with variable formatting
US8606804B2 (en) 2009-08-05 2013-12-10 Microsoft Corporation Runtime-defined dynamic queries
US20110066497A1 (en) 2009-09-14 2011-03-17 Choicestream, Inc. Personalized advertising and recommendation
US20110074811A1 (en) 2009-09-25 2011-03-31 Apple Inc. Map Layout for Print Production
US9158816B2 (en) 2009-10-21 2015-10-13 Microsoft Technology Licensing, Llc Event processing with XML query based on reusable XML query template
US20110131547A1 (en) 2009-12-01 2011-06-02 International Business Machines Corporation Method and system defining and interchanging diagrams of graphical modeling languages
GB2476121A (en) 2009-12-14 2011-06-15 Colin Westlake Linking interactions using a reference for an internet user's web session
US20110208822A1 (en) 2010-02-22 2011-08-25 Yogesh Chunilal Rathod Method and system for customized, contextual, dynamic and unified communication, zero click advertisement and prospective customers search engine
US8739118B2 (en) 2010-04-08 2014-05-27 Microsoft Corporation Pragmatic mapping specification, compilation and validation
US20110258216A1 (en) 2010-04-20 2011-10-20 International Business Machines Corporation Usability enhancements for bookmarks of browsers
US8626770B2 (en) 2010-05-03 2014-01-07 International Business Machines Corporation Iceberg query evaluation implementing a compressed bitmap index
US8799867B1 (en) 2010-06-08 2014-08-05 Cadence Design Systems, Inc. Methods, systems, and articles of manufacture for synchronizing software verification flows
US8352908B2 (en) 2010-06-28 2013-01-08 International Business Machines Corporation Multi-modal conversion tool for form-type applications
CA2707916C (en) 2010-07-14 2015-12-01 Ibm Canada Limited - Ibm Canada Limitee Intelligent timesheet assistance
US20120078595A1 (en) 2010-09-24 2012-03-29 Nokia Corporation Method and apparatus for ontology matching
US8719252B2 (en) 2010-10-22 2014-05-06 Daniel Paul Miranker Accessing relational databases as resource description framework databases
US20120159449A1 (en) 2010-12-15 2012-06-21 International Business Machines Corporation Call Stack Inspection For A Thread Of Execution
US20120173381A1 (en) 2011-01-03 2012-07-05 Stanley Benjamin Smith Process and system for pricing and processing weighted data in a federated or subscription based data source
US8966486B2 (en) 2011-05-03 2015-02-24 Microsoft Corporation Distributed multi-phase batch job processing
US20130024268A1 (en) 2011-07-22 2013-01-24 Ebay Inc. Incentivizing the linking of internet content to products for sale
US9996807B2 (en) 2011-08-17 2018-06-12 Roundhouse One Llc Multidimensional digital platform for building integration and analysis
US20130054551A1 (en) 2011-08-24 2013-02-28 Sap Ag Global product database
GB201115083D0 (en) 2011-08-31 2011-10-19 Data Connection Ltd Identifying data items
US8433702B1 (en) 2011-09-28 2013-04-30 Palantir Technologies, Inc. Horizon histogram optimizations
US20130086482A1 (en) 2011-09-30 2013-04-04 Cbs Interactive, Inc. Displaying plurality of content items in window
US8560494B1 (en) 2011-09-30 2013-10-15 Palantir Technologies, Inc. Visual data importer
US8626545B2 (en) 2011-10-17 2014-01-07 CrowdFlower, Inc. Predicting future performance of multiple workers on crowdsourcing tasks and selecting repeated crowdsourcing workers
US8965422B2 (en) 2012-02-23 2015-02-24 Blackberry Limited Tagging instant message content for retrieval using mobile communication devices
US20130226944A1 (en) 2012-02-24 2013-08-29 Microsoft Corporation Format independent data transformation
US9378526B2 (en) 2012-03-02 2016-06-28 Palantir Technologies, Inc. System and method for accessing data objects via remote references
JP2014029282A (en) * 2012-07-31 2014-02-13 Shimadzu Corp Analysis device validation system, and program therefor
US9798768B2 (en) 2012-09-10 2017-10-24 Palantir Technologies, Inc. Search around visual queries
US9977788B2 (en) * 2012-09-14 2018-05-22 Salesforce.Com, Inc. Methods and systems for managing files in an on-demand system
US9348677B2 (en) 2012-10-22 2016-05-24 Palantir Technologies Inc. System and method for batch evaluation programs
US9471370B2 (en) 2012-10-22 2016-10-18 Palantir Technologies, Inc. System and method for stack-based batch evaluation of program instructions
US10108668B2 (en) 2012-12-14 2018-10-23 Sap Se Column smart mechanism for column based database
US8639552B1 (en) 2013-01-24 2014-01-28 Broadvision, Inc. Systems and methods for creating and sharing tasks
US9805407B2 (en) 2013-01-25 2017-10-31 Illumina, Inc. Methods and systems for using a cloud computing environment to configure and sell a biological sample preparation cartridge and share related data
US20140244388A1 (en) 2013-02-28 2014-08-28 MetroStar Systems, Inc. Social Content Synchronization
US9501202B2 (en) 2013-03-15 2016-11-22 Palantir Technologies, Inc. Computer graphical user interface with genomic workflow
US9898167B2 (en) 2013-03-15 2018-02-20 Palantir Technologies Inc. Systems and methods for providing a tagging interface for external content
US9792194B2 (en) 2013-10-18 2017-10-17 International Business Machines Corporation Performance regression manager for large scale systems
US9105000B1 (en) 2013-12-10 2015-08-11 Palantir Technologies Inc. Aggregating data from a plurality of data sources
US8935201B1 (en) 2014-03-18 2015-01-13 Palantir Technologies Inc. Determining and extracting changed data from a data source
US20160026923A1 (en) 2014-07-22 2016-01-28 Palantir Technologies Inc. System and method for determining a propensity of entity to take a specified action
US10417577B2 (en) * 2015-06-05 2019-09-17 Facebook, Inc. Machine learning system interface
US10395181B2 (en) * 2015-06-05 2019-08-27 Facebook, Inc. Machine learning system flow processing
US10643144B2 (en) * 2015-06-05 2020-05-05 Facebook, Inc. Machine learning system flow authoring tool
US10360069B2 (en) * 2016-02-05 2019-07-23 Sas Institute Inc. Automated transfer of neural network definitions among federated areas
US10176217B1 (en) * 2017-07-06 2019-01-08 Palantir Technologies, Inc. Dynamically performing data processing in a data pipeline system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11830045B2 (en) 2020-07-28 2023-11-28 Instabase, Inc. Systems and methods for user-specific distribution of enterprise software and compensation for user-specific monitored usage of the enterprise software
US11307963B2 (en) * 2020-07-29 2022-04-19 Instabase, Inc. Systems and methods for automatically modifying pipelined enterprise software
US11599446B2 (en) 2020-07-29 2023-03-07 Instabase, Inc. Systems and methods for automatically modifying pipelined enterprise software
US20230161569A1 (en) * 2021-11-22 2023-05-25 Xilinx, Inc. Synthesis flow for data processing engine array applications relying on hardware library packages
US11829733B2 (en) * 2021-11-22 2023-11-28 Xilinx, Inc. Synthesis flow for data processing engine array applications relying on hardware library packages

Also Published As

Publication number Publication date
US10534595B1 (en) 2020-01-14

Similar Documents

Publication Publication Date Title
US10534595B1 (en) Techniques for configuring and validating a data pipeline deployment
US11687551B2 (en) Automatically executing tasks and configuring access control lists in a data transformation system
US20180276283A1 (en) Providing full data provenance visualization for versioned datasets
US11573776B1 (en) Extensible data transformation authoring and validation system
US20240104067A1 (en) Data revision control in large-scale data analytic systems
US10102229B2 (en) Validating data integrations using a secondary data store
US11611627B2 (en) Action flow fragment management
US11789912B2 (en) Data analytic systems
US20220004577A1 (en) Techniques for visualizing dependencies in a data analytics system
EP3441895A1 (en) Processing streaming data in a transaction-based distributed database system
US11863384B2 (en) Automatic derivation of repository access data based on symbolic configuration
US20220147345A1 (en) Automatic modification of repository files
KR20230133600A (en) method for managing multi cloud and electronic device

Legal Events

Date Code Title Description
AS Assignment

Owner name: MORGAN STANLEY SENIOR FUNDING, INC., NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:PALANTIR TECHNOLOGIES INC.;REEL/FRAME:052856/0817

Effective date: 20200604

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: WELLS FARGO BANK, N.A., NORTH CAROLINA

Free format text: SECURITY INTEREST;ASSIGNOR:PALANTIR TECHNOLOGIES INC.;REEL/FRAME:060572/0506

Effective date: 20220701