WO2020076546A1 - Correlated incremental loading of multiple data sets for an interactive data prep application - Google Patents
Correlated incremental loading of multiple data sets for an interactive data prep application
- Publication number
- WO2020076546A1 (PCT/US2019/053935)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- pane
- flow
- rows
- user
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/25—Integrating or interfacing systems involving database management systems
- G06F16/252—Integrating or interfacing systems involving database management systems between a Database Management System and a front-end application
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/26—Visual data mining; Browsing structured data
Definitions
- the disclosed implementations relate generally to data visualization and more specifically to systems, methods, and user interfaces to prepare and curate data for use by a data visualization application.
- Data visualization applications enable a user to understand a data set visually, including distribution, trends, outliers, and other factors that are important to making business decisions.
- Some data sets are very large or complex, and include many data fields.
- Various tools can be used to help understand and analyze the data, including dashboards that have multiple data visualizations.
- Data frequently needs to be manipulated or massaged to put it into a format that can be easily used by data visualization applications.
- ETL (Extract / Transform / Load) tools are one common approach to this work.
- Data flow style systems focus the user on the operations and flow of the data through the system, which helps provide clarity on the overall structure of the job, and makes it easy for the user to control those steps.
- These systems generally do a poor job of showing the user their actual data, which can make it difficult for users to actually understand what is or what needs to be done to their data.
- These systems can also suffer from an explosion of nodes. When each small operation gets its own node in a diagram, even a moderately complex flow can turn into a confusing rat’s nest of nodes and edges.
- Potter’s Wheel style systems present the user with a very concrete spreadsheet-style interface to their actual data, and allow the user to sculpt their data through direct actions. While users are actually authoring a data flow in these systems, that flow is generally occluded, making it hard for the user to understand and control the overall structure of their job.
- some data preparation tools load the data very slowly. For example, there may be multiple queries that run synchronously, so a user has to wait for all of the data to load. Some systems attempt to reduce the perception of slowness by running the queries asynchronously to load the data. However, asynchronous loading still precludes user interaction with the data and the interface may display inconsistent data as the interface displays data for each of the separate asynchronous queries independently.
- Disclosed implementations address the problems with existing data preparation tools in several ways.
- Running multiple asynchronous queries reduces the time to load the data, and the data from the multiple queries is coordinated so that the user interface always displays consistent data.
- a user can immediately interact with the data to make desired changes. The changes are applied to the data that is already displayed, and as new data from the queries arrives, the same changes are applied to the new rows of data as well.
- a computer system for preparing data for subsequent analysis has one or more processors and memory.
- the memory stores one or more programs configured for execution by the one or more processors.
- the one or more programs comprise executable instructions.
- the system displays a user interface that includes a data flow pane, a profile pane, and a data pane.
- the data flow pane displays a node/link flow diagram that identifies a data source. For each of multiple queries against the data source, the system issues the query against the data source asynchronously with an initial block size that specifies a number of rows.
- the system repeats the query asynchronously with an updated block size until all of the rows satisfying the query have been retrieved.
- the system stores retrieved rows satisfying the respective query in a local cache.
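- The incremental block-size loop just described can be pictured with a small sketch. This is a hypothetical Python illustration (the table, column, and file names are made up, and the patent does not disclose an implementation language): each query is re-issued with a doubling LIMIT window until it returns fewer rows than requested, and each batch is appended to a per-query local cache.
```python
import sqlite3
import threading

def incremental_fetch(db_path, query, cache, lock, initial_block_size=100):
    """Issue `query` repeatedly with a growing block size until every row
    satisfying the query has been retrieved, appending rows to `cache`."""
    offset = 0
    block_size = initial_block_size
    conn = sqlite3.connect(db_path)
    try:
        while True:
            rows = conn.execute(
                f"{query} LIMIT ? OFFSET ?", (block_size, offset)
            ).fetchall()
            with lock:
                cache.extend(rows)
            if len(rows) < block_size:
                break                 # all rows satisfying the query retrieved
            offset += len(rows)
            block_size *= 2           # each repeat uses a larger block size
    finally:
        conn.close()

# Multiple queries against the same data source, all using the same sort order.
caches = {"ids_and_states": [], "ids_and_fatals": []}
lock = threading.Lock()
queries = {
    "ids_and_states": "SELECT st_case, state  FROM accidents ORDER BY st_case",
    "ids_and_fatals": "SELECT st_case, fatals FROM accidents ORDER BY st_case",
}
threads = [threading.Thread(target=incremental_fetch,
                            args=("fars.db", q, caches[name], lock))
           for name, q in queries.items()]
for t in threads:
    t.start()                          # the queries run asynchronously
```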
- Periodically (e.g., based on a timer or triggered by receipt of query results from one of the queries), the system determines a unique identifier that identifies rows from the data source that have been retrieved and stored in the local cache for all of the queries. This unique identifier is sometimes referred to as a high water mark.
- the system updates the profile pane to display data value histograms for multiple data fields in the data source.
- Each bar in each data value histogram indicates a count of rows from the data source that (i) are specified by the unique identifier and (ii) have a single specific data value or range of data values for a respective data field. In this way, the system provides a consistent view of data in the profile pane while multiple independent queries run asynchronously.
- each repeat of a respective query against the data source specifies a block size that is larger than the previous block size for the respective query. In some implementations, each repeat of a respective query against the data source specifies a block size that is twice the size of the previous block size for the respective query.
- the periodic determination of the unique identifier is throttled so that it occurs not more than once each second.
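- One way to picture the high-water-mark bookkeeping, continuing the hypothetical sketch above (not the patented implementation): because every query retrieves rows in the same sort order, the smallest row count across the per-query caches is a consistent high water mark, and the profile pane only counts rows below that mark. The one-second throttle is modeled with a simple timestamp check.
```python
import time
from collections import Counter

last_update = 0.0

def determine_high_water_mark(caches):
    """All queries share a sort order, so the fewest rows any cache holds is a
    row number that every query has already reached."""
    return min(len(rows) for rows in caches.values())

def maybe_update_profile(caches, throttle_seconds=1.0):
    """Recompute the profile-pane histograms, but not more than once per second."""
    global last_update
    now = time.time()
    if now - last_update < throttle_seconds:
        return None                       # throttled
    last_update = now
    mark = determine_high_water_mark(caches)
    histograms = {}
    for name, rows in caches.items():
        visible = rows[:mark]             # the same consistent prefix for every query
        n_cols = len(visible[0]) if visible else 0
        histograms[name] = [Counter(row[c] for row in visible) for c in range(n_cols)]
    return mark, histograms
```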
- the system updates rows of data from the data source displayed in the data pane according to the unique identifier.
- a first node in the flow diagram is initially selected, and the data value histograms displayed in the profile pane correspond to a computed data set for the first node.
- a user selects, while the asynchronous queries are running, a second node in the flow diagram.
- the system updates the profile pane to display new data value histograms for a plurality of data fields from a result set at the second node.
- Each bar in each data value histogram indicates a count of rows from the result set that have a single specific data value or range of data values for a respective data field.
- the unique identifier is a primary key value of a primary key field for the data source, and a row from the data source is specified by the unique identifier when a key value corresponding to the row is less than the primary key value.
- the unique identifier is a high water row number, and a row from the data source is specified by the unique identifier when a row number corresponding to the row is less than or equal to the high water row number.
- each of the queries has a same sort order.
- a user modifies data displayed in the profile pane.
- the system translates the user input into an operation applied to the retrieved rows from the data source and stores a definition of the operation. Updating the profile pane when the unique identifier changes comprises applying the defined operation to rows retrieved by the queries.
- a user can make a variety of changes to data in the profile pane.
- the user input is selection of a single bar for a data value histogram corresponding to a first data value bin for a first data field, thereby filtering the displayed data in the profile pane to rows from the data source whose data values for the first field correspond to the first data value bin.
- the stored operation applies a filter that filters the displayed data in the profile pane to rows from the data source whose data values for the first field correspond to the first data value bin.
- the user input removes a data value histogram, corresponding to a first data field, from the profile pane. Updating the profile pane when the unique identifier changes comprises omitting the first data field from the data pane.
- the user input adds a computed column to the profile pane with a corresponding data value histogram, computed as a function of one or more other columns retrieved by the queries.
- Updating the profile pane when the unique identifier changes comprises updating the data value histogram for the computed column according to the function and according to the additional rows retrieved from the data source.
- the user input renames a first data column in the profile pane to a new name. Updating the profile pane when the unique identifier changes comprises retaining the new name for the first data column.
- the user input converts a data type for a first data column in the profile pane to a new data type according to a conversion function.
- Updating the profile pane when the unique identifier changes comprises applying the conversion function to the first data column for the additional rows retrieved from the data source.
- the user input removes a histogram bar corresponding to a bin for a first data column in the profile pane.
- Updating the profile pane when the unique identifier changes comprises removing any of the additional rows retrieved when the rows have a data value for the first data column matching the bin.
- each bin corresponds to an individual data value or a continuous range of data values.
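- The user edits described in the preceding paragraphs can be thought of as an ordered list of stored operation definitions that are replayed whenever the high water mark advances. A minimal hypothetical sketch (the column names and date format are illustrative only): every refresh runs the full list over the visible rows, so rows retrieved later receive exactly the same treatment.
```python
from datetime import datetime

operations = []            # ordered definitions of the user's edits

def add_operation(op):
    operations.append(op)

# Stored operations corresponding to the kinds of edits described above.
add_operation(lambda row: row if row["state"] != "CA" else None)              # filter out a bin
add_operation(lambda row: {k: v for k, v in row.items() if k != "notes"})     # remove a column
add_operation(lambda row: {**row, "total": row["injured"] + row["killed"]})   # computed column
add_operation(lambda row: {**row, "crash_date":
                           datetime.strptime(row["crash_date"], "%Y%m%d")})   # type conversion

def apply_operations(rows):
    """Re-apply every stored operation; newly retrieved rows get the same edits."""
    result = []
    for row in rows:
        for op in operations:
            row = op(row)
            if row is None:          # the row was filtered out
                break
        if row is not None:
            result.append(row)
    return result
```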
- a process refactors a flow diagram.
- the process is performed at a computer system having a display, one or more processors, and memory storing one or more programs configured for execution by the one or more processors.
- the process includes displaying a user interface that includes a plurality of panes, including a data flow pane and a palette pane.
- the data flow pane includes a flow diagram having a plurality of existing nodes, each node specifying a respective operation to retrieve data from a respective data source, specifying a respective operation to transform data, or specifying a respective operation to create a respective output data set.
- the palette pane includes a plurality of flow element templates.
- the process further includes receiving a first user input to select an existing node from the flow diagram or a flow element template from the palette pane, and in response to the first user input: (i) displaying a moveable icon representing a new node for placement in the flow diagram, where the new node specifies a data flow operation corresponding to the selected existing node or the selected flow element template, and (ii) displaying one or more drop targets in the flow diagram according to dependencies between the data flow operation of the new node and operations of the plurality of existing nodes.
- the process further includes receiving a second user input to place the moveable icon over a first drop target of the drop targets, and ceasing to detect the second user input. In response to ceasing to detect the second user input, the process inserts the new node into the flow diagram at the first drop target. The new node performs the specified data flow operation.
- each of the existing nodes has a respective intermediate data set computed according to the specified respective operation and inserting the new node into the flow diagram at the first drop target includes computing an intermediate data set for the new node according to the specified data flow operation.
- the new node is placed in the flow diagram after a first existing node having a first intermediate data set, and computing the intermediate data set for the new node includes applying the data flow operation to the first intermediate data set.
- the new node has no predecessor in the flow diagram, and computing the intermediate data set for the new node includes retrieving data from a data source to form the intermediate data set.
- the process further includes, in response to ceasing to detect the second user input, displaying a sampling of data from the intermediate data set in a data pane of the user interface.
- the data pane is one of the plurality of panes.
- the data flow operation filters rows of data based on values of a first data field, and displaying the one or more drop targets includes displaying one or more drop targets immediately following existing nodes whose intermediate data sets include the first data field.
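- A short sketch of how drop targets could follow from field dependencies (hypothetical node names and fields; the patent does not prescribe this data structure): a new filter node that references a field is only offered as a drop target immediately after nodes whose intermediate data sets still contain that field.
```python
def drop_targets_for_filter(flow_nodes, filter_field):
    """Return the existing nodes after which a filter on `filter_field` may be
    dropped. `flow_nodes` maps node name -> fields in its intermediate data set."""
    return [name for name, fields in flow_nodes.items() if filter_field in fields]

flow_nodes = {
    "Accidents":      {"st_case", "state", "fatals"},
    "Select columns": {"st_case", "state"},
    "Aggregate":      {"state", "accident_count"},
}

# A filter on "fatals" can only be dropped right after nodes that still carry it.
print(drop_targets_for_filter(flow_nodes, "fatals"))   # ['Accidents']
```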
- the first user input selects an existing node from the flow diagram, and inserting the new node into the flow diagram at the first drop target creates a copy of the existing node.
- inserting the new node into the flow diagram at the first drop target further includes removing the existing node from the flow diagram.
- the data flow operation includes a plurality of operations that are executed in a specified sequence.
- a non-transitory computer readable storage medium stores one or more programs configured for execution by a computer system having one or more processors, memory, and a display.
- the one or more programs include instructions for implementing a system that refactors a flow diagram as described herein.
- a computer system prepares data for analysis.
- the computer system includes one or more processors, memory, and one or more programs stored in the memory.
- the programs are configured for execution by the one or more processors.
- the programs display a user interface for a data preparation application.
- the user interface includes a data flow pane, a tool pane, a profile pane, and a data pane.
- the data flow pane displays a node/link flow diagram that identifies data sources, operations, and output data sets.
- the tool pane includes a data source selector that enables users to add data sources to the flow diagram, an operation palette that enables users to insert nodes into the flow diagram for performing specific transformation operations, and a palette of other flow diagrams that a user can incorporate into the flow diagram.
- the profile pane displays schemas corresponding to selected nodes in the flow diagram, including information about data fields and statistical information about data values for the data fields and enables users to modify the flow diagram by interacting with individual data elements.
- the data pane displays rows of data corresponding to selected nodes in the flow diagram, and enables users to modify the flow diagram by interacting with individual data values.
- the information about data fields displayed in the profile pane includes data ranges for a first data field.
- in response to user selection of a first data range of the data ranges, a new node is added to the flow diagram that filters data to the first data range.
- the profile pane enables users to map the data ranges for the first data field to specified values, thereby adding a new node to the flow diagram that performs the user-specified mapping.
- in response to a first user interaction with a first data value in the data pane, a node is added to the flow diagram that filters the data to the first data value.
- in response to a user modification of a first data value of a first data field in the data pane, a new node is added to the flow diagram that performs the modification to each row of data whose data value for the first data field equals the first data value.
- in response to a first user action on a first data field in the data pane, a node is added to the flow diagram that splits the first data field into two or more separate data fields.
- a new operation is added to the operation palette, the new operation corresponding to the first node.
- the profile pane and data pane are configured to update asynchronously as selections are made in the data flow pane.
- the information about data fields displayed in the profile pane includes one or more histograms that display distributions of data values for data fields.
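- As a small illustration of the histograms mentioned above (a hypothetical sketch, not the product's binning algorithm): a categorical field gets one bin per distinct value, while a numeric field is grouped into ranges.
```python
from collections import Counter

def histogram(values, bin_width=None):
    """Bin values for a profile-pane histogram: ranges of `bin_width` for
    numeric fields, one bin per distinct value otherwise."""
    if bin_width:
        return Counter((v // bin_width) * bin_width for v in values)
    return Counter(values)

print(histogram(["CA", "WA", "CA", "OR"]))           # one bar per state
print(histogram([3, 7, 12, 14, 28], bin_width=10))   # bars for 0-9, 10-19, 20-29
```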
- a method executes at an electronic device with a display.
- the electronic device can be a smart phone, a tablet, a notebook computer, or a desktop computer.
- the method implements any of the computer systems described herein.
- a non-transitory computer readable storage medium stores one or more programs configured for execution by a computer system having one or more processors, memory, and a display.
- the one or more programs include instructions for implementing a system that prepares data for analysis as described herein.
- Figure 1 illustrates a graphical user interface used in some implementations.
- Figure 2 is a block diagram of a computing device according to some implementations.
- Figures 3 A and 3B illustrate user interfaces for a data preparation application in accordance with some implementations.
- Figure 3C describes some features of the user interfaces shown in Figures 3A and 3B.
- Figure 3D illustrates a sample flow diagram in accordance with some implementations.
- Figure 3E illustrates a pair of flows that work together but run at different frequencies, in accordance with some implementations.
- Figures 4A - 4V illustrate using a data preparation application to build a join in accordance with some implementations.
- Figure 5A illustrates a portion of a log file in accordance with some implementations.
- Figure 5B illustrates a portion of a lookup table in accordance with some implementations.
- Figures 6A - 6C illustrate some operations, inputs, and output for a flow, in accordance with some implementations.
- Figures 7 A and 7B illustrate some components of a data preparation system, in accordance with some implementations.
- Figure 7C illustrates evaluating a flow, either for analysis or execution, in accordance with some implementations.
- Figure 7D schematically represents an asynchronous sub-system used in some data preparation implementations.
- Figure 8A illustrates a sequence of flow operations in accordance with some implementations.
- Figure 8B illustrates three aspects of a type system in accordance with some implementations.
- Figure 8C illustrates properties of a type environment in accordance with some implementations.
- Figure 8D illustrates simple type checking based on a flow with all data types known, in accordance with some implementations.
- Figure 8E illustrates a simple type failure with types fully known, in accordance with some implementations.
- Figure 8F illustrates simple type environment calculations for a partial flow, in accordance with some implementations.
- Figure 8G illustrates types of a packaged-up container node, in accordance with some implementations.
- Figure 8H illustrates a more complicated type environment scenario, in accordance with some implementations.
- Figure 8I illustrates reusing a more complicated type environment scenario, in accordance with some implementations.
- Figures 8J-1, 8J-2, and 8J-3 indicate the properties for many of the most commonly used operators, in accordance with some implementations.
- Figures 8K and 8L illustrate a flow and corresponding execution process, in accordance with some implementations.
- Figure 8M illustrates that running an entire flow starts with implied physical models at input and output nodes, in accordance with some implementations.
- Figure 8N illustrates that running a partial flow materializes a physical model with the results, in accordance with some implementations.
- Figure 8O illustrates running part of a flow based on previous results, in accordance with some implementations.
- Figures 8P and 8Q illustrate evaluating a flow with a pinned node 860, in accordance with some implementations.
- Figure 9 illustrates a portion of a flow diagram in accordance with some implementations.
- Figure 10 illustrates a process of establishing a high water mark for result sets retrieved from multiple asynchronous queries, in accordance with some implementations.
- Figure 11 illustrates how a data preparation user interface updates while data is being loaded from a data source, in accordance with some implementations.
- Figure 12 illustrates user interactions with partially loaded data in a data preparation user interface and subsequent updates to the user interface as additional data arrives asynchronously, in accordance with some implementations.
- Figure 13 is an example of a profile pane for a data preparation user interface, in accordance with some implementations.
- Figure 1 illustrates a graphical user interface 100 for interactive data analysis.
- the user interface 100 includes a Data tab 114 and an Analytics tab 116 in accordance with some implementations.
- When the Data tab 114 is selected, the user interface 100 displays a schema information region 110, which is also referred to as a data pane.
- the schema information region 110 provides named data elements (e.g., field names) that may be selected and used to build a data visualization.
- the list of field names is separated into a group of dimensions (e.g., categorical data) and a group of measures (e.g., numeric quantities).
- Some implementations also include a list of parameters.
- When the Analytics tab 116 is selected, the user interface displays a list of analytic functions instead of data elements (not shown).
- the graphical user interface 100 also includes a data visualization region 112.
- the data visualization region 112 includes a plurality of shelf regions, such as a columns shelf region 120 and a rows shelf region 122. These are also referred to as the column shelf 120 and the row shelf 122. As illustrated here, the data visualization region 112 also has a large space for displaying a visual graphic. Because no data elements have been selected yet, the space initially has no visual graphic. In some implementations, the data visualization region 112 has multiple layers that are referred to as sheets.
- FIG. 2 is a block diagram illustrating a computing device 200 that can display the graphical user interface 100 in accordance with some implementations.
- the computing device 200 can also be used to run a data preparation ("data prep") application 250.
- Various examples of the computing device 200 include a desktop computer, a laptop computer, a tablet computer, and other computing devices that have a display and a processor capable of running a data visualization application 222.
- the computing device 200 typically includes one or more processing units/cores (CPUs) 202 for executing modules, programs, and/or instructions stored in the memory 214 and thereby performing processing operations; one or more network or other communications interfaces 204; memory 214; and one or more communication buses 212 for interconnecting these components.
- the communication buses 212 may include circuitry that interconnects and controls communications between system components.
- the computing device 200 includes a user interface 206 comprising a display device 208 and one or more input devices or mechanisms 210.
- the input device/mechanism includes a keyboard.
- the input device/mechanism includes a "soft" keyboard, which is displayed as needed on the display device 208, enabling a user to "press keys" that appear on the display 208.
- the display 208 and input device / mechanism 210 comprise a touch screen display (also called a touch sensitive display).
- the memory 214 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices.
- the memory 214 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. In some implementations, the memory 214 includes one or more storage devices remotely located from the CPU(s) 202. The memory 214, or alternately the non-volatile memory device(s) within the memory 214, comprises a non-transitory computer readable storage medium. In some implementations, the memory 214, or the computer readable storage medium of the memory 214, stores the following programs, modules, and data structures, or a subset thereof:
- an operating system 216 which includes procedures for handling various basic system services and for performing hardware dependent tasks;
- a module that connects the device 200 to other computers and devices via the one or more communication network interfaces 204 (wired or wireless) and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
- a web browser 220 (or other application capable of displaying web pages), which enables a user to communicate over a network with remote computers or devices;
- a data visualization application 222 which provides a graphical user interface 100 for a user to construct visual graphics.
- a user selects one or more data sources 240 (which may be stored on the computing device 200 or stored remotely), selects data fields from the data source(s), and uses the selected fields to define a visual graphic.
- the information the user provides is stored as a visual specification 228.
- the data visualization application 222 includes a data visualization generation module 226, which takes the user input (e.g., the visual specification 228), and generates a corresponding visual graphic (also referred to as a“data visualization” or a“data viz”).
- the data visualization application 222 displays the generated visual graphic in the user interface 100.
- the data visualization application 222 executes as a standalone application (e.g., a desktop application).
- the data visualization application 222 executes within the web browser 220 or another application using web pages provided by a web server; and
- databases or data sources 240 (e.g., a first data source 240-1 and a second data source 240-2).
- the data sources are stored as spreadsheet files, CSV files, XML files, or flat files, or stored in a relational database.
- the computing device 200 stores a data prep application 250, which can be used to analyze and massage data for subsequent analysis (e.g., by a data visualization application 222).
- Figure 3B illustrates one example of a user interface 251 used by a data prep application 250.
- the data prep application 250 enables users to build flows 323, as described in more detail below.
- Each of the above identified executable modules, applications, or sets of procedures may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above.
- the above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various implementations.
- the memory 214 stores a subset of the modules and data structures identified above.
- the memory 214 may store additional modules or data structures not described above.
- Although Figure 2 shows a computing device 200, Figure 2 is intended more as a functional description of the various features that may be present rather than as a structural schematic of the implementations described herein.
- items shown separately could be combined and some items could be separated.
- Figures 3A and 3B illustrate a user interface for preparing data in accordance with some implementations.
- Figure 3A shows this conceptually as a menu bar region 301, a left-hand pane 302, a flow pane 303, a profile pane 304, and a data pane 305.
- the profile pane 304 is also referred to as the schema pane.
- the functionality of the "left-hand pane" 302 is in an alternate location, such as below the menu pane 301 or below the data pane 305.
- This interface provides a user with multiple streamlined, coordinated views that help the user to see and understand what they need to do.
- the flow diagram in the flow pane 303 combines and summarizes actions, making the flow more readable, and is coordinated with views of actual data in the profile pane 304 and the data pane 305.
- the data pane 305 provides representative samples of data at every point in the logical flow, and the profile pane provides histograms of the domains of the data.
- the Menu Bar 301 has a File menu with options to create new data flow specifications, save data flow specifications, and load previously created data flow specifications.
- a flow specification is referred to as a flow.
- a flow specification describes how to manipulate input data from one or more data sources to create a target data set. The target data sets are typically used in subsequent data analysis using a data visualization application.
- the Left-Hand Pane 302 includes a list of recent data source connections as well as a button to connect to a new data source.
- the Flow Pane 303 includes a visual representation of the flow: a node/link diagram showing the data sources, the operations that are performed, and the target outputs of the flow.
- Some implementations provide flexible execution of a flow by treating portions of the flow as declarative queries. That is, rather than having a user specify every computational detail, a user specifies the objective (e.g., input and output). The process that executes the flow optimizes plans to choose execution strategies that improve performance. Implementations also allow users to selectively inhibit this behavior to control execution.
- the Profile Pane 304 displays the schema and relevant statistics and/or visualizations for the nodes selected in the Flow Pane 303. Some implementations support selection of multiple nodes simultaneously, but other implementations support selection of only a single node at a time.
- the Data Pane 305 displays row-level data for the selected nodes in the Flow Pane 303.
- a user creates a new flow using a "File -> New Flow" option in the Menu Bar. Users can also add data sources to a flow.
- a data source is a relational database.
- one or more data sources are file-based, such as CSV files or spreadsheet files.
- a user adds a file-based source to the flow using a file connection affordance in the left-hand pane 302. This opens a file dialog that prompts the user to choose a file.
- the left hand pane 302 also includes a database connection affordance, which enables a user to connect to a database (e.g., an SQL database).
- the schema for the node is displayed in the Profile Pane 304.
- the profile pane 304 includes statistics or visualizations, such as distributions of data values for the fields (e.g., as histograms or pie charts).
- schemas for each of the selected nodes are displayed in the profile pane 304.
- the data for the node is displayed in the Data Pane 305.
- the data pane 305 typically displays the data as rows and columns.
- Implementations make it easy to edit the flow using the flow pane 303, the profile pane 304, or the data pane 305.
- some implementations enable a right click operation on a node/table in any of these three panes and add a new column based on a scalar calculation over existing columns in that table.
- the scalar operation could be a mathematical operation to compute the sum of three numeric columns, a string operation to concatenate string data from two columns that are character strings, or a conversion operation to convert a character string column into a date column (when a date has been encoded as a character string in the data source).
- a right-click menu (accessed from a table/node in the Flow Pane 303, the Profile Pane 304, or the Data Pane 305) provides an option to "Create calculated field..." Selecting this option brings up a dialog to create a calculation.
- the calculations are limited to scalar computations (e.g., excluding aggregations, custom Level of Detail calculations, and table calculations).
- the user interface adds a calculated node in the Flow Pane 303, connects the new node to its antecedent, and selects this new node.
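- The scalar calculations mentioned above can each be expressed as a row-wise function attached to the new calculated node. A hypothetical sketch (column names and date format are illustrative): summing three numeric columns, concatenating two string columns, and converting a string-encoded date.
```python
from datetime import datetime

def calc_total_sales(row):
    # Sum of three numeric columns.
    return row["q1_sales"] + row["q2_sales"] + row["q3_sales"]

def calc_full_name(row):
    # Concatenation of two character-string columns.
    return row["first_name"] + " " + row["last_name"]

def calc_order_date(row):
    # Conversion of a character string into a date value.
    return datetime.strptime(row["order_date"], "%Y-%m-%d").date()

def add_calculated_column(rows, name, calc):
    """What the new calculated node does to the rows of its antecedent."""
    return [{**row, name: calc(row)} for row in rows]
```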
- As the number of nodes increases, the flow pane 303 adds scroll boxes.
- nodes in the flow diagram can be grouped together and labeled, which is displayed hierarchically (e.g., showing a high-level flow initially, with drill down to see the details of selected nodes).
- a user can also remove a column by interacting with the Flow Pane 303, the Profile Pane 304, or the Data Pane 305.
- a user can select a node and choose "Output As" to create a new output dataset. In some implementations, this is performed with a right click. This brings up a file dialog that lets the user select a target file name and directory (or a database and table name). Doing this adds a new node to the Flow Pane 303, but does not actually create the target datasets.
- a target dataset has two components, including a first file (a Tableau Data Extract or TDE) that contains the data, and a corresponding index or pointer entry (a Tableau Data Source or TDS) that points to the data file.
- the actual output data files are created when the flow is run.
- a user runs a flow by choosing "File -> Run Flow" from the Menu Bar 301. Note that a single flow can produce multiple output data files.
- the flow diagram provides visual feedback as it runs.
- the Menu Bar 301 includes an option on the "File" menu to "Save" or "Save As," which enables a user to save the flow.
- a flow is saved as a ".loom" file. This file contains everything needed to recreate the flow on load. When a flow is saved, it can be reloaded later using a menu option to "Load" in the "File" menu. This brings up a file picker dialog to let the user load a previous flow.
- Figure 3B illustrates a user interface for data preparation, showing the user interface elements in each of the panes.
- the menu bar 311 includes one or more menus, such as a File menu and an Edit menu. Although the edit menu is available, more changes to the flow are performed by interacting with the flow pane 313, the profile pane 314, or the data pane 315.
- the left-hand pane 312 includes a data source palette/selector, which includes affordances for locating and connecting to data.
- the set of connectors includes extract-only connectors, including cubes. Implementations can issue custom SQL expressions to any data source that supports it.
- the left-hand pane 312 also includes an operations palette, which displays operations that can be placed into the flow. This includes arbitrary joins (of arbitrary type and with various predicates), union, pivot, rename and restrict column, projection of scalar calculations, filter, aggregation, data type conversion, data parse, coalesce, merge, split, value replacement, and sampling.
- Some implementations also support operators to create sets (e.g., partition the data values for a data field into sets), binning (e.g., grouping numeric data values for a data field into a set of ranges), and table calculations (e.g., calculate data values (e.g., percent of total) for each row that depend not only on the data values in the row, but also other data values in the table).
- the left-hand pane 312 also includes a palette of other flows that can be incorporated in whole or in part into the current flow. This enables a user to reuse components of a flow to create new flows. For example, if a portion of a flow has been created that scrubs a certain type of input using a combination of 10 steps, that 10 step flow portion can be saved and reused, either in the same flow or in completely separate flows.
- the flow pane 313 displays a visual representation (e.g., node/link flow diagram) 323 for the current flow.
- the Flow Pane 313 provides an overview of the flow, which serves to document the process. In many existing products, a flow is overly complex, which hinders comprehension. Disclosed implementations facilitate understanding by coalescing nodes, keeping the overall flow simpler and more concise. As noted above, as the number of nodes increases, implementations typically add scroll boxes. The need for scroll bars is reduced by coalescing multiple related nodes into super nodes, which are also called container nodes. This enables a user to see the entire flow more conceptually, and allows a user to dig into the details only when necessary.
- When a "super node" is expanded, the flow pane 313 shows just the nodes within the super node, and the flow pane 313 has a heading that identifies what portion of the flow is being displayed. Implementations typically enable multiple hierarchical levels. A complex flow is likely to include several levels of node nesting.
- the profile pane 314 includes schema information about the data at the currently selected node (or nodes) in the flow pane 313.
- the schema information provides statistical information about the data, such as a histogram 324 of the data distribution for each of the fields.
- a user can interact directly with the profile pane to modify the flow 323 (e.g., by selecting a data field for filtering the rows of data based on values of that data field).
- the profile pane 314 also provides users with relevant data about the currently selected node (or nodes) and visualizations that guide a user’s work. For example, histograms 324 show the distributions of the domains of each column. Some implementations use brushing to show how these domains interact with each other.
- An example here illustrates how the process is different from typical implementations by enabling a user to directly manipulate the data in a flow.
- a user wants to exclude California from consideration.
- In a typical tool, a user selects a "filter" node, places the filter into the flow at a certain location, then brings up a dialog box to enter the calculation formula, such as "state name <> 'CA'".
- the user can see the data value in the profile pane 314 (e.g., showing the field value‘CA’ and how many rows have that field value) and in the data pane 315 (e.g., individual rows with‘CA’ as the value for state name).
- the user can right click on "CA" in the list of state names in the Profile Pane 314 (or in the Data Pane 315), and choose "Exclude" from a drop-down.
- the user interacts with the data itself, not a flow element that interacts with the data.
- Implementations provide similar functionality for calculations, joins, unions, aggregates, and so on. Another benefit of the approach is that the results are immediate.
- When "CA" is filtered out, the filter applies immediately. If the operation takes some time to complete, the operation is performed asynchronously, and the user is able to continue with work while the job runs in the background.
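- As a rough illustration of the gesture just described (hypothetical structures, not the product's internals): the right-click produces a filter node appended after the selected node, and its predicate is applied at once to the rows already on screen.
```python
def exclude_value(flow, selected_node, field, value):
    """Translate a right-click 'Exclude' on a field value into a new filter node."""
    node = {
        "type": "filter",
        "parent": selected_node,
        "predicate": lambda row: row[field] != value,
    }
    flow.append(node)
    return node

flow = [{"type": "input", "table": "accidents"}]
node = exclude_value(flow, flow[0], "state", "CA")

# The filter applies immediately to whatever rows are already displayed; rows
# that arrive later from the background queries receive the same filter.
displayed_rows = [{"state": "CA"}, {"state": "WA"}, {"state": "OR"}]
displayed_rows = [r for r in displayed_rows if node["predicate"](r)]
print(displayed_rows)    # [{'state': 'WA'}, {'state': 'OR'}]
```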
- the data pane 315 displays the rows of data corresponding to the selected node or nodes in the flow pane 313. Each of the columns in the data pane 315 corresponds to one of the data fields.
- a user can interact directly with the data in the data pane to modify the flow 323 in the flow pane 313.
- a user can also interact directly with the data pane to modify individual field values.
- the user interface applies the same change to all other values in the same column whose values (or pattern) match the value that the user just changed. For example, if a user changed "WA" to "Washington" for one field value in a State data column, some implementations update all other "WA" values to "Washington" in the same column.
- Some implementations go further to update the column to replace any state abbreviations in the column with full state names (e.g., replacing "OR" with "Oregon").
- the user is prompted to confirm before applying a global change to an entire column.
- a change to one value in one column can be applied (automatically or pseudo-automatically) to other columns as well.
- a data source may include both a state for residence and a state for billing. A change to formatting for states can then be applied to both.
- the sampling of data in the data pane 315 is selected to provide valuable information to the user. For example, some implementations select rows that display the full range of values for a data field (including outliers). As another example, when a user has selected nodes that have two or more tables of data, some implementations select rows to assist in joining the two tables. The rows displayed in the data pane 315 are selected to display both rows that match between the two tables as well as rows that do not match. This can be helpful in determining which fields to use for joining and/or to determine what type of join to use (e.g., inner, left outer, right outer, or full outer).
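- One way such a join-aware sample could be chosen (a simplified, hypothetical heuristic): take a few rows whose key values appear in the other table and a few whose key values do not, so the data pane shows both matches and mismatches.
```python
def join_aware_sample(left_rows, right_rows, key, n=3):
    """Sample rows from `left_rows` that both match and fail to match
    `right_rows` on `key`, to help judge the join key and join type."""
    right_keys = {r[key] for r in right_rows}
    matching = [r for r in left_rows if r[key] in right_keys][:n]
    non_matching = [r for r in left_rows if r[key] not in right_keys][:n]
    return matching + non_matching

accidents = [{"st_case": 1}, {"st_case": 2}, {"st_case": 3}]
vehicles  = [{"st_case": 1}, {"st_case": 3}]
print(join_aware_sample(accidents, vehicles, "st_case"))
```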
- Figure 3C summarizes some of the features of the user interface, and what each feature shows.
- the flow diagram 323 is always displayed in the flow pane 313.
- the profile pane 314 and the data pane 315 are also always shown, but the content of these panes changes based on which node or nodes are selected in the flow pane 313.
- a selection of a node in the flow pane 313 brings up one or more node specific panes (not illustrated in Figure 3A or Figure 3B).
- a node specific pane is in addition to the other panes.
- node specific panes are displayed as floating popups, which can be moved.
- node specific panes are displayed at fixed locations within the user interface.
- the left-hand pane 312 includes a data source palette / chooser for selecting or opening data sources, as well as an operations palette for selecting operations that can be applied to the flow diagram 323.
- Some implementations also include an "other flow palette," which enables a user to import all or part of another flow into the current flow 323.
- a flow diagram 323 provides an easy, visual way to understand how the data is getting processed, and keeps the process organized in a way that is logical to a user.
- a user can perform various tasks, including:
- the profile pane 314 provides a quick way for users to figure out if the results of the transforms are what they expect them to be. Outliers and incorrect values typically“pop out” visually based on comparisons with both other values in the node or based on comparisons of values in other nodes.
- the profile pane helps users ferret out data problems, regardless of whether the problems are caused by incorrect transforms or dirty data. In addition to helping users find the bad data, the profile pane also allows direct interactions to fix the discovered problems.
- the profile pane 314 updates asynchronously. When a node is selected in the flow pane, the user interface starts populating partial values (e.g., data value distribution histograms) that get better as time goes on. In some implementations, the profile pane includes an indicator to alert the user whether the profile is complete or not. With very large data sets, some implementations build a profile based on sample data only.
- a user can perform various tasks, including:
- the data pane 315 provides a way for users to see and modify rows that result from the flows.
- the data pane selects a sampling of rows corresponding to the selected node (e.g., a sample of 10, 50, or 100 rows rather than a million rows).
- the rows are sampled in order to display a variety of features.
- the rows are sampled statistically, such as every nth row.
- the data pane 315 is typically where a user cleans up data (e.g., when the source data is not clean). Like the profile pane, the data pane updates asynchronously. When a node is first selected, rows in the data pane 315 start appearing, and the sampling gets better as time goes on. Most data sets will only have a subset of the data available here (unless the data set is small).
- a user can perform various tasks, including:
- Sort for navigation: A user can sort the data in the data pane based on a column, which has no effect on the flow. The purpose is to assist in navigating the data in the data pane.
- a user can also create a filter that applies to the flow. For example, a user can select an individual data value for a specific data field, then take action to filter the data according to that value (e.g., exclude that value or include only that value). In this case, the user interaction creates a new node in the data flow 323.
- Some implementations enable a user to select multiple data values in a single column, and then build a filter based on the set of selected values (e.g., exclude the set or limit to just that set).
- Modify row data: A user can directly modify a row, for example, changing a data value for a specific field in a specific row from 3 to 4.
- a node specific pane displays information that is particular to a selected node in the flow. Because a node specific pane is not needed most of the time, the user interface typically does not designate a region within the user interface that is solely for this use. Instead, a node specific pane is displayed as needed, typically using a popup that floats over other regions of the user interface. For example, some implementations use a node specific pane to provide specific user interfaces for joins, unions, pivoting, unpivoting, running Python scripts, parsing log files, or transforming JSON objects into tabular form.
- the Data Source Palette/Chooser enables a user to bring in data from various data sources.
- the data source palette/chooser is in the left-hand pane 312.
- a user can perform various tasks with the data source palette/chooser, including:
- Connect to a data source, which can be an SQL database, a data file such as a CSV or spreadsheet, a non-relational database, a web service, or other data source.
- Specify connection properties: A user can specify credentials and other properties needed to connect to data sources.
- the properties include selection of specific data (e.g., a specific table in a database or a specific sheet from a workbook file).
- the left hand pane 312 provides an operations palette, which allows a user to invoke certain operations. For example, some implementations include an option to "Call a Python Script" in the operations palette.
- the operations palette provides a list of known operations (including user defined operations), and allows a user to incorporate the operations into the flow using user interface gestures (e.g., dragging and dropping).
- Some implementations provide an Other Flow Palette/Chooser, which allows users to easily reuse flows they’ve built or flows other people have built.
- the other flow palette provides a list of other flows the user can start from, or incorporate.
- Some implementations support selecting portions of other flows in addition to selecting entire flows.
- a user can incorporate other flows using user interface gestures, such as dragging and dropping.
- the node internals specify exactly what operations are going on in a node. There is sufficient information to enable a user to "refactor" a flow or understand a flow in more detail. A user can view exactly what is in the node (e.g., what operations are performed), and can move operations out of the node, into another node.
- Some implementations include a project model, which allows a user to group together multiple flows into one“project” or“workbook.” For complex flows, a user may split up the overall flow into more understandable components.
- operations status is displayed in the left-hand pane 312. Because many operations are executed asynchronously in the background, the operations status region indicates to the user what operations are in progress as well as the status of the progress (e.g., 1% complete, 50% complete, or 100% complete). The operations status shows what operations are going on in the background, enables a user to cancel operations, enables a user to refresh data, and enables a user to have partial results run to completion.
- a flow, such as the flow 323 in Figure 3B, represents a pipeline of rows that flow from original data sources through transformations to target datasets.
- Figure 3D illustrates a simple example flow 338. This flow is based on traffic accidents involving vehicles.
- the relevant data is stored in an SQL database in an accident table and a vehicle table.
- a first node 340 reads data from the accident table
- a second node 344 reads the data from the vehicle table.
- the accident table is normalized (342) and one or more key fields are identified (342).
- one or more key fields are identified (346) for the vehicle data.
- the two tables are joined (348) using a shared key, and the results are written (350) to a target data set.
- Because the accident table and vehicle table are both in the same SQL database, an alternative is to create a single node that reads the data from the two tables in one query.
- the query can specify what data fields to select and whether the data should be limited by one or more filters (e.g., WHERE clauses).
- the data is retrieved and joined locally as indicated in the flow 338 because the data used to join the tables needs to be modified.
- the primary key of the vehicle table may have an integer data type whereas the accident table may specify the vehicles involved using a zero-padded character field.
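- For the key mismatch just described, the normalize and join steps of the flow amount to something like the following sketch (hypothetical field names): the zero-padded character reference in the accident data is converted to an integer before joining against the vehicle table's integer key.
```python
def normalize_vehicle_key(accident_rows):
    """Convert the zero-padded character vehicle reference to an integer so it
    can be joined against the vehicle table's integer primary key."""
    for row in accident_rows:
        row["vehicle_id"] = int(row["vehicle_id"])     # e.g. "000123" -> 123
    return accident_rows

def join_on_vehicle_id(accident_rows, vehicle_rows):
    vehicles_by_id = {v["vehicle_id"]: v for v in vehicle_rows}
    return [{**a, **vehicles_by_id[a["vehicle_id"]]}
            for a in accident_rows if a["vehicle_id"] in vehicles_by_id]

accidents = [{"st_case": 1, "vehicle_id": "000123"}]
vehicles  = [{"vehicle_id": 123, "make": "Ford"}]
print(join_on_vehicle_id(normalize_vehicle_key(accidents), vehicles))
```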
- a flow abstraction like the one shown in Figure 3D is common to most ETL and data preparation products.
- This flow model gives users logical control over their transformations.
- Such a flow is generally interpreted as an imperative program and executed with little or no modification by the platform. That is, the user has provided the specific details to define physical control over the execution.
- a typical ETL system working on this flow will pull down the two tables from the database exactly as specified, shape the data as specified, join the tables in the ETL engine, and then write the result out to the target dataset.
- Full control over the physical plan can be useful, but forecloses the system’s ability to modify or optimize the plan to improve performance (e.g., execute the preceding flow at the SQL server).
- Most of the time customers do not need control of the execution details, so implementations here enable operations to be expressed declaratively.
- Some implementations here span the range from fully-declarative queries to imperative programs.
- Some implementations utilize an internal analytical query language (AQL) and a Federated Evaluator.
- a flow is interpreted as a single declarative query specification whenever possible.
- This declarative query is converted into AQL and handed over to a Query Evaluator, which ultimately divvies up the operators, distributes, and executes them.
- the entire flow can be cast as a single query. If both tables come from the same server, this entire operation would likely be pushed to the remote database, achieving a significant performance benefit.
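- If both tables live on the same server, the declarative reading of the flow in Figure 3D could collapse to a single pushed-down query, roughly like the hypothetical SQL below (the document does not show the actual AQL or generated query):
```python
import sqlite3

# The whole flow of Figure 3D expressed as one declarative query that the
# source database can evaluate, instead of joining row by row in the ETL engine.
PUSHED_DOWN_QUERY = """
    SELECT a.*, v.*
    FROM accidents AS a
    JOIN vehicles  AS v
      ON CAST(a.vehicle_id AS INTEGER) = v.vehicle_id
"""

conn = sqlite3.connect("fars.db")       # stands in for the remote SQL server
result = conn.execute(PUSHED_DOWN_QUERY).fetchall()
```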
- the flexibility not only enables optimization and distribution of flow execution, it also enables execution of queries against live data sources (e.g., from a transactional database, and not just a data warehouse).
- Figures 4A - 4 V illustrate some aspects of adding a join to a flow in accordance with some implementations.
- the user interface includes a left pane 312, a flow area 313, a profile area 314, and a data grid 315.
- the user first connects to an SQL database using the connection palette in the left pane 312.
- the database includes Fatality Analysis Reporting System (FARS) data provided by the National Highway Traffic Safety Administration.
- In Figure 4B, a user selects the "Accidents" table 404 from the list 402 of available tables.
- the user drags the Accident table icon 406 to the flow area 313. Once the table icon 406 is dropped in the flow area 313, a node 408 is created to represent the table, as shown in Figure 4D.
- data for the Accident table is loaded, and profile information for the accident table is displayed in the profile pane 314.
- the profile pane 314 provides distribution data for each of the columns, including the state column 410, as illustrated in Figure 4E.
- each column of data in the profile pane displays a histogram to show the distribution of data. For example, California, Florida, and Georgia have a large number of accidents, whereas Delaware has a small number of accidents.
- the profile pane makes it easy to identify columns that are keys or partial keys using key icons 412 at the top of each column.
- In Figure 4F, some implementations use three different icons to specify whether a column is a database key, a system key 414, or "almost" a system key 416.
- a column is almost a system key when the column in conjunction with one or more other columns is a system key.
- a column is almost a system key if the column would be a system key if null valued rows were excluded. In this example, both "ST Case" and "Case Number" are almost system keys.
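- A minimal sketch (a hypothetical helper, not the disclosed implementation) of how a column might be classified as a key or an "almost" key: unique on its own, unique once null-valued rows are excluded, or unique only in combination with another column.

```python
def classify_key(rows, column, other_columns):
    """Classify `column` for a list of row dicts as a key, an almost key, or neither."""
    values = [r[column] for r in rows]
    if len(set(values)) == len(values):
        return "key"
    non_null = [v for v in values if v is not None]
    if len(set(non_null)) == len(non_null):
        return "almost key (unique when null rows are excluded)"
    for other in other_columns:
        pairs = [(r[column], r[other]) for r in rows]
        if len(set(pairs)) == len(pairs):
            return f"almost key (in conjunction with {other})"
    return "not a key"

rows = [
    {"st_case": 1, "case_number": "A", "year": 2014},
    {"st_case": 1, "case_number": "B", "year": 2014},
]
print(classify_key(rows, "st_case", ["case_number", "year"]))
# -> almost key (in conjunction with case_number)
```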
- In Figure 4G, a user has selected the "Persons" table 418 in the left pane 312.
- the user drags the Persons table 418 to the flow area 313; while being dragged, the table is displayed as a moveable icon 419.
- a Persons node 422 is created in the flow area, as illustrated in Figure 4I.
- the profile pane 314 splits into two portions: the first portion 420 shows the profile information for the Accidents node 408 and the second portion 421 shows the profile information for the Persons node 422.
- Figure 4J provides a magnified view of the flow pane 313 and the profile pane 314.
- the profile pane 314 includes an option 424 to show join column candidates (i.e., possibilities for joining the data from the two nodes). After selecting this option, data fields that are join candidates are displayed in the profile pane 314, as illustrated in Figure 4K. Because the join candidates are now displayed, the profile pane 314 displays an option 426 to hide join column candidates.
- the profile pane 314 indicates (430) that the column ST Case in the Persons table might be joined with the ST Case field in the Accidents table.
- the profile pane also indicates (428) that there are three additional join candidates in the Accidents table and indicates (432) that there are two additional join candidates in the Persons table.
- Figure 4N illustrates an alternative method of joining the data for multiple nodes.
- a user has loaded the Accidents table data 408 and the Populations table data 441 into the flow area 313.
- a join is automatically created and a Join Experience pane 442 is displayed that enables a user to review and/or modify the join.
- the Join Experience is placed in the profile pane 314; in other implementations, the Join Experience temporarily replaces the profile pane 314.
- a new node 440 is added to the flow, which displays graphically the creation of a connection between the two nodes 408 and 441.
- the Join Experience 442 includes a toolbar area 448 with various icons, as illustrated in Figure 4O.
- the interface identifies which fields in each table are join candidates.
- Some implementations include a favorites icon 452, which displays or highlights "favorite" data fields (e.g., either previously selected by the user, previously identified as important by the user, or previously selected by users generally).
- the favorites icon 452 is used to designate certain data fields as favorites. Because there is limited space to display columns in the profile pane 314 and the data pane 315, some implementations use the information on favorite data fields to select which columns are displayed by default.
- selection of the“show keys” icon 454 causes the interface to identify which data columns are keys or parts of a key that consists of multiple data fields.
- Some implementations include a data/metadata toggle icon 456, which toggles the display from showing the information about the data to showing information about the metadata. In some implementations, the data is always displayed, and the metadata icon 456 toggles whether or not the metadata is displayed in addition to the data.
- Some implementations include a data grid icon 458, which toggles display of the data grid 315. In Figure 4O, the data grid is currently displayed, so selecting the data grid icon 458 would cause the data grid to not display. Implementations typically include a search icon 460 as well, which brings up a search window. By default, a search applies to both data and metadata (e.g., both the names of data fields as well as data values in the fields). Some implementations include the option for an advanced search to specify more precisely what is searched.
- On the left of the join experience 442 is a set of join controls, including a specification of the join type 464.
- a join is typically a left outer join, an inner join, a right outer join, or a full outer join. These are shown graphically by the join icons 464. The current join type is highlighted, but the user can change the type of the join by selecting a different icon.
- Some implementations provide a join clause overview 466, which displays both the names of the fields on both sides of the join, as well as histograms of data values for the data fields on both sides of the join. When there are multiple data fields in the join, some implementations display all of the relevant data fields; other implementations include a user interface control (not shown) to scroll through the data fields in the join. Some implementations also include an overview control 468, which illustrates how many rows from each of the tables are joined based on the type of join condition. Selection of portions within this control determines what is displayed in the profile pane 314 and the data grid 315.
- Figures 4P, 4Q, and 4R illustrate alternative user interfaces for the join control area 462.
- the join type appears at the top.
- the upper portion of Figure 4Q appears in Figure 4U below.
- Figure 4R includes a lower portion that shows how the two tables are related.
- the split bar 472 represents the rows in the Accidents table
- the split bar 474 represents the Populations table.
- the large bar 477 in the middle represents the rows that are connected by an inner join between the two tables.
- the join result set 476 also includes a portion 478 that represents rows of the Accidents table that are not linked to any rows of the Populations table.
- At the bottom is another rectangle 480, which represents rows of the Populations table that are not linked to any rows of the Accidents table.
- the portion 480 is not included in the result set 476 (the rows in the bottom rectangle 480 would be included in a right outer join or a full outer join).
- a user can select any portion of this diagram, and the selected portion is displayed in the profile pane and the data pane. For example, a user can select the "left outer portion" rectangle 478, and then look at the rows in the data pane to see if those rows are relevant to the user's analysis.
- Figure 4S shows a Join Experience using the join control interface elements illustrated in Figure 4R, including the join control selector 464.
- the left outer join icon 482 is highlighted, as shown more clearly in the magnified view of Figure 4T.
- the first table is the Accident table
- the second table is the Factor table.
- the interface shows both the number of rows that are joined 486 and the number that are not joined 488. This example has a large number of rows that are not joined.
- the user can select the not joined bar 488 to bring up the display in Figure 4V.
- the nulls are a result of the left outer join and non-matching values: the Factor table has no entries prior to 2010.
- Disclosed implementations support many features that assist in a variety of scenarios. Many of the features have been described above, but some of the following scenarios illustrate the features.
- Alex works in IT, and one of his jobs is to collect and prepare logs from the machines in their infrastructure to produce a shared data set that is used for various debugging and analysis in the IT organization.
- the machines run Windows, and Alex needs to collect the Application logs. There is already an agent that runs every night and dumps CSV exports of the logs to a shared directory; each day’s data are output to a separate directory, and they are output with a format that indicates the machine name.
- a snippet from the Application log is illustrated in Figure 5A.
- Line 1 contains header information. This may or may not be the case in general.
- Alex creates a flow that reads in all of the CSV files in a given directory, and performs a jagged union on them (e.g., create a data field if it exists in at least one of the CSV files, but when the same data field exists in two or more of the CSV files, create only one instance of that data field).
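- A sketch of the jagged union Alex's flow performs, assuming pandas and a hypothetical directory layout: a column that appears in any file appears once in the result, and rows from files that lack it are filled with nulls.

```python
import glob
import pandas as pd

# Read every CSV export in the day's directory and union them "jagged":
# pandas aligns columns by name and fills missing columns with NaN.
frames = [pd.read_csv(path, dtype=str) for path in glob.glob("logs/2019-09-30/*.csv")]
union = pd.concat(frames, ignore_index=True, sort=False)
```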
- the CSV input routine does a pretty good job reading in five columns, but chokes on the quotes in the sixth column, reading them in as several columns.
- Alex then drags his target data repository into the flow pane, and wires up the output to append these records to a cache that will contain a full record of his logs.
- Alex's flow queries this target dataset to find the set of machines that reported the previous day, compares this to today's machines, and outputs an alert to Alex with a list of expected machines that did not report.
- Alex could have achieved the same result in different ways. For example:
- Alex could create two separate flows: one that performs the ingest; and one that compares each day’s machines with the previous day’s machines, and then alerts Alex with the results.
- Alex could create a flow that performs the ingest in one stage. When that is complete, Alex could execute a second flow that queries the database, compares each day to the previous day, and alerts Alex.
- Alex could create a flow that would have the target as both input and output. This flow would perform the ingest, write it to the database, and further aggregate to find the day’s machines. It would also query the target to get the previous day’s results, perform the comparison, and fire the alert.
- Alex knows that the machines should report overnight, so the first thing Alex does every morning is run his flow. He then has the rest of the morning to check up on machines that did not report.
- the files don’t have uniform names.
- the accident file is named "accident.dbf" in the years 1975-1982 and 1994-2014, but is named "accYYYY.dbf" (where YYYY is the four-digit year) in the middle years.
- Danielle, a developer at a major software company, is looking at data that represents build times. Danielle has a lot of control over the format of the data, and has produced it in a nice consumable CSV format, but wants to simply load it and append it to a database she's created.
- the part identification number is a little problematic: one system has a hyphen in the value. Earl takes one of the values in the data pane 315, selects the hyphen, and presses delete. The interface infers a rule to remove the hyphens from this column, and inserts a rule into the flow that removes the hyphen for all of the data in that column.
- Gaston works at an investment broker in a team responsible for taking data produced by IT and digesting it so that it can be used by various teams that work with customers. IT produces various data sets that show part of a customer’s portfolio - bond positions, equity positions, etc. - but each alone is not what Gaston’s consumers need.
- Karl is a strategic account manager for a major software company. He is trying to use Tableau to visualize information about attendees at an industry conference, who they work for, who their representatives are, whether they are active or prospective customers, whether their companies are small or large, and so on.
- Karl uses a REST connector for LinkedIn™ that he's found, and passes it each of the email addresses in his data to retrieve the country and state for each person.
- This procedure takes the information he has (e.g., the person's name, the person's company, and the person's position) and uses LinkedIn's search functionality to come up with the best result for each entry. He then joins the company and location data to the data in his Server to find the correct account.
- Disclosed implementations can ingest data that varies widely in structure (e.g., relational, semi-structured, or unstructured), format (e.g., structured storage, CSV files, or JSON files), and source (e.g., from a file system or from a formal database).
- a user stores the results (e.g., as a Tableau Data Extract (TDE)) so that the results can be analyzed.
- Disclosed systems 250 give control to users. In many cases, the data prep application makes intelligent choices for the user, but the user is always able to assert control. Control often has two different facets: control over the logical ordering of operations, which is used to ensure the results are correct and match the user’s desired semantics; and physical control, which is mostly used to ensure performance.
- Disclosed data prep applications 250 also provide freedom. Users can assemble and reassemble their data production components however they wish in order to achieve the shape of data they need.
- Disclosed data prep applications 250 provide incremental interaction and immediate feedback. When a user takes actions, the system provides feedback through immediate results on samples of the user’s data, as well as through visual feedback.
- ETL tools use imperative semantics. That is, a user specifies the details of every operation and the order in which to perform the operations. This gives the user complete control.
- an SQL database engine evaluates declarative queries and is able to select an optimal execution plan based on the data requested by the query.
- Disclosed implementations support both imperative and declarative operations, and a user can select between these two execution options at various levels of granularity. For example, a user may want to exercise complete control of a flow at the outset while learning about a new dataset. Later, when the user is comfortable with the results, the user may relinquish all or part of the control to the data prep application in order to optimize execution speed.
- a user can specify a default behavior for each flow (imperative or declarative) and override the default behavior on individual nodes.
- Disclosed implementations can write data to many different target databases, including a TDE, SQL Server, Oracle, Redshift, flat files, and so on.
- a flow creates a new data set in the target system.
- the flow modifies an existing dataset by appending new rows, updating existing rows, inserting rows, or deleting rows.
- Errors can occur while running a flow. Errors can include transient system issues, potential known error conditions in the data (for which the user may encode corrective action), and implicit constraints that the author did not consider. Disclosed implementations generally handle these error conditions automatically when possible. For example, if the same error condition was encountered in the past, some implementations reapply a known solution.
- [00205] Although a flow is essentially a data transformation, implementations enable users to annotate their outputs with declarative modelling information to explain how the outputs can be used, viewed, validated, or combined. Examples include:
- Disclosed implementations generally include these components:
- An Abstract Flow Language (AFL).
- An execution engine interprets and executes AFL programs. In some implementations, the engine runs locally. Queries may be pushed to remote servers, but the results and further processing will be done using local resources.
- the server provides a shared, distributed execution environment for flows. This server can schedule and execute flows from many users, and can analyze and scale out AFL flows automatically.
- Some data visualization applications are able to execute data prep flows and can use TDEs or other created datasets to construct data visualizations.
- Disclosed implementations can also import some data flows created by other applications (e.g., created in an ETL tool).
- Implementations enable users to:
- With access to a configured Server, a user can:
- the output of a node can be directed to more than one following node.
- a user can specify whether the new node creates a fork at the selected node or is inserted as an intermediate node in the existing sequence of operations. For example, if there is currently a path from node A to node B, and the user chooses to insert a new node at A, the user can select to either create a second path to the new node, or insert the new node between A and B.
- Users can add filters to a flow of arbitrary complexity. For example, a user can click to add a filter at a point in the flow, and then enter a calculation that acts as a predicate.
- the calculation expressions are limited to scalar functions. However, some implementations enable more complex expressions, such as aggregations, table calculations, or Level of Detail expressions.
- a user can edit any filter, even if it was inferred by the system.
- all filters are represented as expressions.
- the profile pane 314 and data pane 315 provide easy ways to create filters. For example, some implementations enable a user to select one or more data values for a column in the data pane, then right-click and choose "keep only" or "exclude." This inserts a filter into the flow at the currently selected node. The system infers an expression to implement the filter, and the expression is saved. If the user needs to modify the filter later, it is easy to do so, regardless of whether the later time is right away or a year later.
- a user can select a bucket that specifies a range of values for a data field.
- the range is typically specified as a list of values.
- the range is typically specified as a contiguous range with an upper or lower bound.
- a user can select a bucket and easily create a filter that selects (or excludes) all rows whose value for the field falls within the range.
- the filter expression uses OR. That is, a row matches the expression if it matches any one of the values or ranges.
- a user can also create a filter based on multiple data values in a single row in the data pane.
- the filter expression uses AND. That is, only rows that match all of the specified values match the expression. This can be applied to buckets in the profile pane as well. In this case, a row must match on each of the selected bucket ranges.
- Some implementations also allow creation of a filter based on a plurality of data values that include two or more rows and include two or more columns.
- the expression created is in disjunctive normal form, with each disjunct corresponding to one of the rows with a selected data value.
- Some implementations apply the same technique to range selections in the profile window as well.
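- A minimal sketch (a hypothetical helper, with a Tableau-style [Field] syntax used only for illustration) of building the disjunctive-normal-form expression described above from a selection that spans several rows and columns:

```python
def build_filter(selection):
    """selection: one dict per selected row, mapping column name -> selected value.
    Returns a filter expression in disjunctive normal form (one disjunct per row)."""
    disjuncts = []
    for row in selection:
        conjunct = " AND ".join(f"[{col}] = {value!r}" for col, value in row.items())
        disjuncts.append(f"({conjunct})")
    return " OR ".join(disjuncts)

print(build_filter([{"State": "CA", "Year": 2014},
                    {"State": "FL", "Year": 2015}]))
# ([State] = 'CA' AND [Year] = 2014) OR ([State] = 'FL' AND [Year] = 2015)
```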
- As illustrated above with respect to Figures 4A - 4V, a user can create joins. Depending on whether declarative execution is enabled, the join may be pushed to a remote server for execution, as illustrated in Figure 9 below.
- Some implementations provide simplified or condensed versions of flows as nodes and annotations.
- a user can toggle between a full view or a condensed view, or toggle individual nodes to hide or expose the details within the node.
- a single node may include a dozen operations to perform cleanup on certain source files. After several iterations of experimentation with the cleanup steps, they are working fine, and the user does not generally want to see the detail. The detail is still there, but the user is able to hide the clutter by viewing just the condensed version of the node.
- operational nodes that do not fan out are folded together into annotations on the node. Operations such as joins and splits will break the flow with additional nodes.
- the layout for the condensed view is automatic.
- a user can rearrange the nodes in the condensed view.
- Both the profile pane and the data pane provide useful information about the set of rows associated with the currently selected node in the flow pane.
- the profile pane shows the cardinalities for various data values in the data (e.g., a histogram showing how many rows have each data value). The distributions of values are shown for multiple data fields. Because of the amount of data shown in the profile pane, retrieval of the data is usually performed asynchronously.
- a user can click on a data value in the profile pane and see proportional brushing of other items.
- When a user selects a specific data value, the user interface:
- rows are not displayed in the data pane unless specifically requested by the user.
- the data pane is always automatically populated, with the process proceeding asynchronously.
- Some implementations apply different standards based on the cardinality of the rows for the selected node. For example, some implementations display the rows when the cardinality is below a threshold and either does not display the rows or proceeds asynchronously if the cardinality is above the threshold. Some implementations specify two thresholds, designating a set of rows as small, large, or very large.
- the interface displays the rows for small cardinalities, proceeds to display rows asynchronously for large cardinalities, and does not display the results for very large cardinalities.
- the data pane can only display a small number of rows, which is usually selected by sampling (e.g., every nth row). In some implementations, the data pane implements an infinite scroll to accommodate an unknown amount of data.
- Disclosed data prep applications provide a document model that the User Interface natively reads, modifies, and operates with. This model describes flows to users, while providing a formalism for the UI.
- the model can be translated to Tableau models that use AQL and the Federated Evaluator to run.
- the model also enables reliable caching and re-use of intermediate results.
- the data model includes three sub-models, each of which describes a flow in its appropriate stages of evaluation.
- the first sub-model is a "Loom Doc" 702. (Some implementations refer to the data prep application as "Loom.")
- a Loom doc 702 is the model that describes the flow that a user sees and interacts with directly.
- a Loom doc 702 contains all the information that is needed to perform all of the ETL operations and type checking. Typically, the Loom doc 702 does not include information that is required purely for rendering or editing the flow.
- a Loom doc 702 is constructed as a flow. Each operation has:
- the input operations perform the "Extract" part of ETL. They bind the flow to a data source, and are configured to pull data from that source and expose that data to the flow.
- Input operations include loading a CSV file or connecting to an SQL database.
- a node for an input operation typically has zero inputs and at least one output.
- the output operations provide the "Load" part of ETL. They operate with the side effects of actually updating the downstream data sources with the data stream that comes in. These nodes have one input, and no output (there are no "outputs" to subsequent nodes in the flow).
- the container operations group other operations into logical groups. These are used to help make flows easier to document. Container operations are exposed to the user as "Nodes" in the flow pane. Each container node contains other flow elements (e.g., a sequence of regular nodes), as well as fields for documentation. Container nodes can have any number of inputs and any number of outputs.
- a data stream represents the actual rows of data that moves across the flow from one node to another. Logically, these can be viewed as rows, but operationally a data stream can be implemented in any number of ways. For example, some flows are simply compiled down to AQL (Analytical Query Language).
- the extensible operations are operations that the data prep application does not directly know how to evaluate, so it calls a third-party process or code. These are operations that do not run as part of the federated evaluator.
- the logical model 704 is a model that contains all of the entities, fields, relationships, and constraints. It is built up by running over the flow, and defines the model that is built up at any part in the flow.
- the fields in the logical model are columns in the results.
- the entities in the logical model represent tables in the results, although some entities are composed of other entities. For example, a union has an entity that is a result of other entities.
- the constraints in the logical model represent additional constraints, such as filters.
- the relationships in the logical model represent the relationships across entities, providing enough information to combine them.
- the physical model 706 is the third sub-model.
- the physical model includes metadata for caching, including information that identifies whether a flow needs to be re-run, as well as how to directly query the results database for a flow.
- the metadata includes:
- This data is used for optimizing flows as well as enabling faster navigation of the results.
- the physical model includes a reference to the logical model used to create this physical model (e.g. a pointer to a file or a data store).
- the physical model 706 also includes a Tableau Data Source (TDS), which identifies the data source that will be used to evaluate the model. Typically, this is generated from the logical model 704.
- the physical model also includes an AQL (Analytical Query Language) query that will be used to extract data from the specified data source.
- the loom doc 702 is compiled (722) to form the logical model 704, and the logical model 704 is evaluated (724) to form the physical model.
- Figure 7B illustrates a file format 710 that is used by some implementations.
- the file format 710 is used in both local and remote execution. Note that the file format contains both data and flows. In some instances, a flow may create data by doing a copy/paste. In these cases, the data becomes a part of the flow.
- the file format holds a UI state, separate from the underlying flow. Some of the display is saved with the application. Other parts of layout are user specific and are stored outside of the application.
- the file format can be versioned.
- the file format has a multi-document format.
- the file format has three major parts, as illustrated in Figure 7B.
- the file format 710 includes editing info 712. This section is responsible for making the editing experience continue across devices and editing sessions. This section stores any pieces of data that are not needed for evaluating a flow, but are needed to re-construct the UI for the user.
- the editing info 712 includes Undo History, which contains a persistent undo buffer that allows a user to undo operations after an editing session has been closed and re-opened.
- the editing info also includes a UI State, such as what panes are visible and the x/y coordinates of flow nodes, which are not reflected in how a flow is run. When a user re-opens the UI, the user sees what was there before, making it easier to resume work.
- the file format 710 includes a Loom Doc 702, as described above with respect to Figure 7A. This is the only section of the file format that is required. This section contains the flow.
- the file format 710 also includes local data 714, which contains any tables or local data needed to run a flow. This data can be created through user interactions, such as pasting an HTML table into the data prep application, or when a flow uses a local CSV file that needs to get uploaded to a server for evaluation.
- the Evaluation Sub-System is illustrated in Figure 7C.
- the evaluation sub-system provides a reliable way to evaluate a flow.
- the evaluation sub-system also provides an easy way to operate over the results of an earlier run or to layer operations on top of a flow’s operation.
- the evaluation sub-system provides a natural way to re-use the results from one part of the flow when running later parts of the flow.
- the evaluation sub-system also provides a fast way to run against cached results.
- There are two basic contexts for evaluating a flow, as illustrated in Figure 7C.
- the process evaluates the flow and pours the results into the output nodes. If running in debug mode, the process writes out the results in temporary databases that can be used for navigation, analysis, and running partial flows faster.
- In navigation and analysis (730), a user is investigating a dataset. This can include looking at data distributions, looking for dirty data, and so on. In these scenarios, the evaluator generally avoids running the entire flow, and instead runs faster queries directly against the temporary databases created from running the flows previously.
- Some implementations include an Async sub-system, as illustrated in Figure 7D.
- the async sub-system provides non-blocking behavior to the user. If the user is doing a bunch of operations that don’t require getting rows back, the user is not blocked on getting them.
- the async sub-system provides incremental results. Often a user won't need the full set of data to start validating or trying to understand the results. In these cases, the async sub-system gives the best results as they arrive.
- the async sub-system also provides a reliable “cancel” operation for queries in progress.
- the async model includes four main components:
- A browser layer. This layer gets a UUID and an update version from the async tasks it starts. It then uses the UUID for getting updates.
- A REST API. This layer starts tasks in a thread-pool. The tasks in the thread-pool update the Status Service as they get updates. When the browser layer wants to know if there are updates, it calls a REST API procedure to get the latest status.
- A federated evaluator.
- the AqlApi calls into the federated evaluator, which provides another layer of asynchrony, because it runs as a new process.
- Disclosed implementations enable users to create flows that can be easily refactored. What this means is that users are able to take operations or nodes and easily:
- Implementations provide direct feedback on whether these operations create errors. For example, suppose a user has a flow with ADD COLUMN -> FILTER. The user can drag the FILTER node before the ADD COLUMN node, unless the FILTER uses the column that was added. If the FILTER uses the new column, the interface immediately raises an error, telling the user the problem.
- the system helps by identifying drop targets. For example, if a user selects a node and begins to drag it within the flow pane, some implementations display locations (e.g., by highlighting) where the node can be moved.
- Disclosed data prep applications use a language that has three aspects:
- a data flow language This is how users define a flow’s inputs, transforms, relationships, and outputs. These operations directly change the data model.
- the types in this language are entities (tables) and relationships rather than just individual columns. Users do not see this language directly, but use it indirectly through creating nodes and operations in the UI. Examples include joining tables and removing columns.
- the language describes a flow of operations that logically goes from left to right, as illustrated in Figure 8A. However, because of the way the flow is evaluated, the actual implementation can rearrange the operations for better performance. For example, moving filters to remote databases as the data is extracted can greatly improve overall execution speed.
- the data flow language is the language most people associate with the data prep application because it describes the flow and relationships that directly affect the ETL. This part of the language has two major components: models and nodes/operations. This is different from standard ETL tools. Instead of a flow directly operating on data (e.g., flowing actual rows from a "filter" operation to an "add field" operation), disclosed flows define a logical model that specifies what it wants to create and a physical model defining how it wants to materialize the logical model. This abstraction provides more leeway in terms of optimization.
- Models are the basic nouns. They describe the schema and the relationships of the data that is being operated on. As noted above, there is a logical model and a separate physical model.
- a Logical Model provides the basic“type” for a flow at a given point. It describes the fields, entities, and relationships that describe the data being transformed. This model includes things such as sets and groups. The logical model specifies what is desired, but not any materialization. The core parts of this model are:
- Fields. These are the actual fields that will get turned into data fields in the output (or aid calculations that do so). Each field is associated with an entity and an expression. Fields don't necessarily all need to be visible. There are three types of fields: physical fields, computed fields, and temporary fields. Physical fields get materialized into the resulting data set. These can be either proper fields or calculations. Computed fields are written to the resulting TDS as computed fields, so they will never get materialized. Temporary fields are written to better factor the calculations for a physical field. They are not written out in any way. If a temporary field is referenced by a computed field, the language will issue a warning and treat this field as a computed field.
- Entities. These are the objects that describe the namespace for the logical model. Entities are created either by the schema of a table coming in, or can be composed of a collection of entities that are associated together by relationships.
- Relationships. These are objects that describe how different entities relate to each other. They can be used to combine multiple Entities into a new composite entity.
- Constraints. These describe constraints added to an entity. Constraints include filters that actually limit the results for an entity. Some constraints are enforced.
- Enforced constraints are guaranteed from an upstream source, such as a unique constraint, or not-null constraint. Some constraints are asserted. These are constraints that are believed to be true. Whenever data is found to violate this constraint, the user is notified in some way.
- a flow can include one or more forks in the logical model. Forking a flow uses the same Logical Model for each fork. However, there are new entities under the covers for each side of the fork. These entities basically pass through to the original entities, unless a column gets projected or removed on them.
- Some implementations allow pinning a node or operation.
- the flows describe the logical ordering for a set of operations, but the system is free to optimize the processing by making the physical ordering different.
- a user may want to make sure the logical and physical orderings are exactly the same.
- a user can“pin” a node.
- the system ensures that the operations before the pin happen physically before the operations after the pin. In some cases, this will result in some form of materialization. However, the system streams through this whenever possible.
- the physical model describes a materialization of the logical model at a particular point. Each physical model has a reference back to the logical model that was used to generate it. Physical models are important to caching, incremental flow runs, and load operations.
- a physical model includes a reference to any file that contains results of a flow, which is a unique hash describing the logical model up to this point.
- the physical model also specifies the TDS (Tableau Data Source) and the AQL (Analytical Query Language) generated for a run.
- Nodes and Operations are the basic verbs. Nodes in the model include operations that define how the data is shaped, calculated, and filtered. In order to stay consistent with the UI language, the term "operations" refers to one of the "nodes" in a flow that does something. Nodes are used to refer to containers that contain operations, and map to what a user sees in the flow pane in the UI. Each specialized node/operation has properties associated with it that describe how it will operate.
- [00268] There are four basic types of nodes: input operations, transform operations, output operations, and container nodes. Input operations create a logical model from some external source. Examples include an operation that imports a CSV. Input operations represent the E in ETL (Extract).
- Transform operations transform a logical model into a new logical model.
- a transform operation takes in a logical model and returns a new logical model.
- Transform nodes represent the T in ETL (Transform).
- An example is a project operation that adds a column to an existing logical model.
- Output operations take in a logical model and materialize it into some other data store. For example, an operation that takes a logical model and materializes its results into a TDE.
- These operations represent the L in ETL (Load).
- Container nodes are the base abstraction around how composition is done across flows, and also provide an abstraction for what should be shown as the nodes are displayed in the UI.
- Operations are atomic actions, each having inputs and outputs, as well as a required set of fields.
- Required fields are fields that are needed by an operation. The required fields can be determined by evaluating the operation with an empty type environment, then gathering any of the fields that are "assumed."
- Type Environments are the constructs that determine how to look up the types for a given point in a flow.
- Each "edge" in the flow graph represents a type environment.
- Type checking is performed in two phases.
- In the type environment creation phase, the system runs through the flow in the direction of the flow. The system figures out what types are needed by each node, and what type environments they output. If the flow is abstract (e.g., it does not actually connect to any input nodes), the empty type environment is used.
- Type refinement is the second phase. In this phase, the system takes the type environments from the first phase and flows them "backwards" to see if any of the type narrowing that happened in type environment creation created type conflicts. In this phase, the system also creates a set of required fields for the entire sub-flow.
- Each operation has a type environment associated with it. This environment contains all the fields that are accessible and their types. As illustrated in Figure 8C, a type environment has five properties.
- [00272] An environment can be either "Open" or "Closed". When an environment is Open, it assumes that there may be fields that it does not know about. In this case, any field that is not known will be assumed to be any type. These fields will be added to the AssumedTypes field. When an environment is Closed, it assumes it knows all the fields, so any field that is not known is a failure.
- Types member This is a mapping from field names to their types.
- the type may be either another type environment or it can be a Field.
- a field is the most basic type.
- Each field is composed of two parts.
- basicTypes is a set of types that describes the possible set of types for the field. If this set has only one element, then we know what type it has. If the set is empty, then there was a type error. If the set has more than one element, then there are several possible types. The system can resolve and do further type narrowing if needed.
- derivedFrom is a reference to the fields that went into deriving this one.
- Each field in a scope has a potential set of types.
- Each type can be any combination of Boolean, String, Integer, Decimal, Date, DateTime, Double, Geometry, and Duration.
- the AssumedTypes property is a list of the types that were added because they were referenced rather than defined. For example, if there is an expression [A] + [B] that is evaluated in an open type environment, the system assumes that there were two fields: A and B.
- the AssumedTypes property allows the system to keep track of what was added this way. These fields can be rolled up for further type winnowing as well as for being able to determine the required fields for a container.
- the “Previous” type environment property is a reference to the type environment this one was derived from. It is used for the type refinement stages, during the backwards traversal through the flow looking for type inconsistencies.
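- A minimal sketch (an assumed, simplified structure, not the disclosed implementation) of an open type environment that records assumed fields and narrows a field's set of possible types; an empty set after narrowing signals a type error.

```python
class Field:
    def __init__(self, basic_types, derived_from=()):
        self.basic_types = set(basic_types)     # possible types; one = known, empty = type error
        self.derived_from = list(derived_from)

class TypeEnvironment:
    def __init__(self, is_open=True, previous=None):
        self.is_open = is_open
        self.types = {}                         # field name -> Field
        self.assumed = []                       # fields added because they were referenced
        self.previous = previous

    def lookup(self, name):
        if name in self.types:
            return self.types[name]
        if not self.is_open:
            raise TypeError(f"unknown field {name} in a closed environment")
        # Open environment: assume the field exists and may be any type.
        field = Field({"Boolean", "String", "Integer", "Decimal", "Date",
                       "DateTime", "Double", "Geometry", "Duration"})
        self.types[name] = field
        self.assumed.append(name)
        return field

    def narrow(self, name, allowed):
        field = self.lookup(name)
        field.basic_types &= set(allowed)
        if not field.basic_types:
            raise TypeError(f"type conflict on field {name}")
        return field

env = TypeEnvironment(is_open=True)
env.narrow("A", {"String", "Integer"})          # e.g., from the expression [A] + [B]
env.narrow("B", {"String", "Integer"})
print(env.assumed, env.types["A"].basic_types)  # ['A', 'B'] {'String', 'Integer'}
```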
- Type environments can also be composed. This happens in operations that take multiple inputs. When a type environment is merged, it will map each type environment to a value in its types collection. Further type resolution is then delegated to the individual type environments. It will then be up to the operator to transform this type environment to the output type environment, often by "flattening" the type environment in some way to create a new type environment that only has fields as types.
- the type environment created by an input node is the schema returned by the data source it is reading.
- this will be the schema of the table, query, stored procedure, or view that it is extracting.
- For a CSV file this will be the schema that is pulled from the file, with whatever types a user has associated with the columns.
- Each column and its type is turned into a field/type mapping.
- the type environment is marked as closed.
- the type environment for a transform node is the environment for its input. If it has multiple inputs, they will be merged to create the type environment for the operation. The output is a single type environment based on the operator.
- the table in Figures 8J-1 to 8J-3 lists many of the operations.
- a container node may have multiple inputs, so its type environment will be a composite type environment that routes appropriate children type environments to the appropriate output nodes.
- a container node is the only type of node that is able to have more than one output. In this case, it may have multiple output type environments. This should not be confused with branching the output, which can happen with any node. However, in the case of branching an output, each of the output edges has the same type environment.
- the UI and middle tier are able to get at the runtime types. This information is able to flow through the regular callback, as well as being embedded in the types for tempdb (e.g., in case the system is populating from a cached run).
- the UI shows users the more specific known types, but does not type check based on them. This enables creation of OutputNodes that use more specific types, while allowing the rest of the system to use the more simplified types.
- Figure 8D illustrates simple type checking based on a flow with all data types known.
- Figure 8E illustrates a simple type failure with types fully known.
- Figure 8F illustrates simple type environment calculations for a partial flow.
- Figure 8G illustrates types of a packaged-up container node.
- Figure 8H illustrates a more complicated type environment scenario.
- Figure 8I illustrates reusing a more complicated type environment scenario.
- Some implementations infer data types and use the inferred data types for optimizing or validating a data flow. This is particularly useful for text-based data sources such as XLS or CSV files. Based on how a data element is used later in a flow, a data type can sometimes be inferred, and the inferred data type can be used earlier in the flow. In some implementations, a data element received as a text string can be cast as the appropriate data type immediately after retrieval from the data source. In some instances, inferring data types is recursive. That is, by inferring the data type for one data element, the system is able to infer the data types of one or more additional data elements. In some instances, a data type inference is able to rule out one or more data types without determining an exact data type (e.g., determining that a data element is numeric, but not able to determine whether it is an integer or a floating point number).
- Type checking narrows one variable at a time. In the steps above, type checking is applied to only one variable before re-computing the known variables. This is to be safe in the case there is an overloaded function with multiple signatures, such as Function1(string, int) and Function1(int, string). Suppose this is called as Function1([A], [B]). The process determines that the types are A: [String, int] and B: [String, int]. However, it would be invalid for the types to resolve to A: [String] and B: [String], because if A is a String, B needs to be an int. Some implementations handle this type of dependency by re-running the type environment calculation after each type narrowing.
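- A small worked sketch of the Function1 example above (the signature table is hypothetical): fixing A to String and then recomputing against the surviving signatures leaves only int for B, which is why narrowing is applied one variable at a time.

```python
signatures = [("String", "Integer"), ("Integer", "String")]   # Function1 overloads

def possible_types(arg_index, fixed=None):
    """Possible types for one argument, given any arguments already narrowed."""
    fixed = fixed or {}
    result = set()
    for sig in signatures:
        if any(sig[i] not in allowed for i, allowed in fixed.items()):
            continue
        result.add(sig[arg_index])
    return result

print(possible_types(0))                          # {'String', 'Integer'} for A
print(possible_types(1))                          # {'String', 'Integer'} for B
print(possible_types(1, fixed={0: {"String"}}))   # {'Integer'}: narrowing A forces B
```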
- Some implementations optimize what work to do by only doing work on nodes that actually have a required field that includes the narrowed variable. There is a slight subtlety here, in that narrowing A may end up causing B to get narrowed as well. Take the Function1 example above. In these cases, the system needs to know when B has changed and check its narrowing as well.
- [00295] When looking at how operators will act, it is best to think of them in terms of four major properties, identified here as "Is Open", "Multi-Input", "Input Type", and "Resulting Type".
- An operation is designated as Open when it flows the columns through.
- "filter" is an open operation, because any columns that are in the input will also be in the output.
- Group by is not Open, because any column that is not aggregated or grouped on will not be in the resulting type.
- The "Multi-Input" property specifies whether this operation takes multiple input entities. For example, a join is multi-input because it takes two entities and makes them one. A union is another operation that is multi-input.
- The "Input Type" property specifies the type the node requires. For a multi-input operation, this is a composite type where each input contains its own type.
- The "Resulting Type" property specifies the output type that results from this operation.
- a flow is created over time as needs change.
- a flow grows by organic evolution, it can become large and complex.
- a user needs to modify a flow, either to address a changing need, or to reorganize a flow so that it is easier to understand.
- Such refactoring of a flow is difficult or impossible in many ETL tools.
- Implementations here not only enable refactoring, but assist the user in doing so.
- the system can get the RequireFields for any node (or sequence of nodes), and then light up drop targets at any point that has a type environment that can accommodate it.
- Another scenario involves reusing existing nodes in a flow. For example, suppose a user wants to take a string of operations and make a custom node. The custom node operates to "normalize insurance codes". The user can create a container node with a number of operations in it. The system can then calculate the required fields for it. The user can save the node for future use, either using a save command or dragging the container node to the left-hand pane 312. Now, when a person selects the node from the palette in the left-hand pane, the system lights up drop targets in the flow, and the user can drop the node onto one of the drop targets (e.g., just like the refactoring example above).
- User Defined Flow Operations can extend a data flow with Input, Output, and Transform operations. These operations can use custom logic or analytics to modify the contents of a row.
- Users can build in scripts that do non-data flow operations, such as downloading a file from a share, unzipping a file, running a flow for every file in a directory, and so on.
- Implementations here take an approach that is language agnostic in terms of how people use the provided extensibility.
- a first extension allows users to build custom nodes that fit into a flow. There are two parts to creating an extension node:
- A "ScriptNode" is a node where the user can write script to manipulate rows, and pass them back.
- the system provides API functions. The user can then write a transform (or input or output) node as a script (e.g., in Python or Javascript).
- A "ShellNode" is a node where the user can define an executable program to run, and pipe the rows into the executable. The executable program will then write out the results to stdout, write errors to stderr, and exit when it is done.
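- A minimal sketch of the kind of executable a "ShellNode" might run under the convention described above (the CSV layout and the "machine" column are assumptions): rows arrive on stdin, transformed rows go to stdout, and problems are reported on stderr.

```python
#!/usr/bin/env python3
# Hypothetical ShellNode executable: read CSV rows from stdin, upper-case the
# assumed "machine" column, write results to stdout, and report bad rows on stderr.
import csv
import sys

reader = csv.DictReader(sys.stdin)
if reader.fieldnames is None:
    sys.exit(0)                                   # nothing to do on empty input
writer = csv.DictWriter(sys.stdout, fieldnames=reader.fieldnames)
writer.writeheader()
for row in reader:
    try:
        row["machine"] = row["machine"].upper()
        writer.writerow(row)
    except Exception as exc:                      # malformed row: report and continue
        print(f"skipping row {row!r}: {exc}", file=sys.stderr)
```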
- Implementations have a flow evaluation process that provides many useful features. These features include:
- the evaluation process works based on the interplay between the logical models and physical models. Any materialized physical model can be the starting point of a flow.
- the language runtime provides the abstractions to define what subsections of the flows to run. In general, the runtime does not determine when to run sub-flows versus full flows. That is determined by other components.
- Figure 8M illustrates that running an entire flow starts with implied physical models at input and output nodes.
- Figure 8N illustrates that running a partial flow materializes a physical model with the results.
- Figure 8O illustrates running part of a flow based on previous results.
- Some implementations are case sensitive with respect to column names, but some implementations are not. Some implementations provide a user configurable parameter to specify whether column names are case sensitive.
- Figures 8P and 8Q illustrate evaluating a flow with a pinned node 860.
- the nodes before the pin are executed first to create user node results 862, and the user node results 862 are used in the latter portion of the flow. Note that pinning does not prevent rearranging execution within each of the portions.
- a pinned node is effectively a logical checkpoint.
- In addition to nodes that are pinned by a user, some nodes are inherently pinned based on the operations they perform. For example, if a node makes a call out to custom code (e.g., a Java process), logical operations cannot be moved across the node. The custom code is a "black box," so its inputs and outputs must be well-defined.
- [00321] In some instances, moving the operations around can improve performance, but create a side-effect of reducing consistency. In some cases, a user can use pinning as a way to guarantee consistency, but at the price of performance.
- a user can edit data values directly in the data grid 315.
- the system infers a general rule based on the user's edit. For example, a user may add the string "19" to the data value "75" to create "1975." Based on the data and the user edit, the system may infer a rule that the user wants to fill out the character string to form 4-character years for the two-character years that are missing the century.
- the inference is based solely on the change itself (e.g., prepend "19"), but in other instances, the system also bases the inference on the data in the column (e.g., that the column has values in the range "74" - "99").
- the user is prompted to confirm the rule before applying the rule to other data values in the column.
- the user can also choose to apply the same rule to other columns.
- User edits to a data value can include adding to a current data value as just described, removing a portion of a character string, replacing a certain substring with another substring, or any combination of these.
- telephone numbers may be specified in a variety of formats, such as (XXX)YYY-ZZZZ.
- a user may edit one specific data value to remove the parentheses and the dash and add dots to create XXX.YYY.ZZZZ.
- the system can infer the rule based on a single instance of editing a data value and apply the rule to the entire column.
- numeric fields can have rules inferred as well. For example, if a user replaces a negative value with zero, the system may infer that all negative values should be zeroed out.
- a rule is inferred when two or more data values are edited in a single column of the data grid 315 according to a shared rule.
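- A minimal sketch (hypothetical rule representation) of inferring a rule from a single edit and applying it to the rest of the column, in the spirit of the "19" + "75" -> "1975" example above.

```python
def infer_rule(before, after):
    """Infer a simple prepend/append/replace rule from one edited value."""
    if after.endswith(before):
        prefix = after[: len(after) - len(before)]
        return lambda v: prefix + v if len(v) == len(before) else v
    if after.startswith(before):
        suffix = after[len(before):]
        return lambda v: v + suffix if len(v) == len(before) else v
    return lambda v: after if v == before else v      # fall back to a literal replacement

rule = infer_rule("75", "1975")            # the user edited "75" into "1975"
column = ["75", "82", "1999", "88"]
print([rule(v) for v in column])           # ['1975', '1982', '1999', '1988']
```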
- Figure 9 illustrates how a logical flow 323 can be executed in different ways depending on whether the operations are designated as imperative or declarative.
- this flow there are two input datasets, dataset A 902 and dataset B 904.
- these datasets are retrieved directly from data sources.
- the two datasets 902 and 904 are combined using a join operation 906 to produce an intermediate dataset.
- the flow 323 applies a filter 908, which creates another intermediate dataset, with fewer rows than the first intermediate dataset created by the join operation 906.
- the execution optimizer can reorganize the physical flow.
- the filter can be pushed back to the query that retrieved dataset A 902, thus reducing the amount of data retrieved and processed. This can be particularly useful when dataset A 902 is retrieved from a remote server and/or the filter eliminates a substantial number of rows.
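- A toy sketch (an illustrative flow representation, not the disclosed optimizer) of the rewrite described above: when the filter only references columns of dataset A, it can be pushed below the join so it becomes part of the query that retrieves A.

```python
# Toy logical flow: a filter sits above a join of two source scans.
flow = ("filter", "a.state = 'CA'",
        ("join", "a.st_case = b.st_case",
         ("scan", "A"),
         ("scan", "B")))

def push_filter_down(node):
    """If a filter above a join references only the left input, push it onto that input."""
    if node[0] == "filter" and node[2][0] == "join":
        predicate, (_, condition, left, right) = node[1], node[2]
        if predicate.startswith("a."):             # crude "references only A" check
            return ("join", condition, ("filter", predicate, left), right)
    return node

print(push_filter_down(flow))
# ('join', 'a.st_case = b.st_case', ('filter', "a.state = 'CA'", ('scan', 'A')), ('scan', 'B'))
```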
- a user builds and changes a data flow over time, so some implementations provide incremental flow execution. Intermediate results for each node are saved, and recomputed only when necessary.
- a flow hash for a given node is a hash value that identifies all of the operations in the flow up to and including the given node. If any aspect of the flow definition has changed (e.g., adding nodes, removing nodes, or changing the operations at any of the nodes), the hash will be different. Note that the flow hash just tracks the flow definition, and does not look at the underlying data.
- a vector clock tracks versioning of the data used by a node. It is a vector because a given node may use data from multiple sources.
- the data sources include any data source accessed by any node up to and including the given node.
- the vector includes a monotonically increasing version value for each of the data sources. In some cases, the monotonically increasing value is a timestamp from the data source. Note that the value corresponds to the data source, not when the data was processed by any nodes in the flow. In some cases, a data source can provide the monotonically increasing version value (e.g., the data source has edit timestamps).
- the data prep application 250 computes a surrogate value (e.g., when was the query sent to or retrieved from the data source).
- a surrogate value e.g., when was the query sent to or retrieved from the data source.
- the data prep application 250 limits the number of nodes that need to be recomputed.
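- A minimal sketch (assumed structures) of the two pieces of cache metadata described above: a flow hash over the chain of operations up to a node, and a per-source vector clock compared against the cached versions to decide whether the node must be recomputed.

```python
import hashlib

def flow_hash(operations):
    """Hash the flow definition up to and including a node (order matters, data does not)."""
    h = hashlib.sha256()
    for op in operations:                  # e.g., ["input:accidents", "filter:state='CA'"]
        h.update(op.encode("utf-8"))
    return h.hexdigest()

def needs_recompute(cached, current):
    """cached/current: (flow_hash, vector_clock); the clock maps data source -> version."""
    cached_hash, cached_clock = cached
    current_hash, current_clock = current
    if cached_hash != current_hash:
        return True                        # the flow definition itself changed
    # Recompute if any data source used by this node has a different (newer) version.
    return any(current_clock.get(src) != ver for src, ver in cached_clock.items())

ops = ["input:accidents", "filter:state='CA'"]
cached = (flow_hash(ops), {"accidents": 3})
current = (flow_hash(ops), {"accidents": 4})
print(needs_recompute(cached, current))    # True: the accidents source has newer data
```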
- Figure 10 illustrates a process of establishing a high water mark for result sets retrieved from multiple asynchronous queries, in accordance with some implementations.
- Each group of four bars represents a point in time, with time increasing in the order T1, T2, T3, and T4.
- the four bars in each group represent partial results for four distinct queries that are running asynchronously.
- the dotted line in each group represents what rows of data have been retrieved from a data source for all of the queries. Sometimes the dotted line is referred to as a high water mark.
- the high water mark is typically specified by a unique identifier.
- the unique identifier is a primary key value from the data source. For example, if each of the four queries retrieves data from the same data source, in primary key order, a primary key value can be used as the high water mark.
- the unique identifier is a row number.
- the fourth result set 1008-1 has the fewest rows of the four result sets 1002-1, 1004-1, 1006-1, and 1008-1, so the high water mark 1010-1 at T1 is determined by the fourth result set 1008-1.
- at the second time T2, more results have been received for the second result set 1004-2 and the third result set 1006-2, but the first result set 1002-2 and the fourth result set 1008-2 remain the same. Because of this, the high water mark 1010-2 remains the same as well.
- at the third time T3, the first result set 1002-3 has received additional rows of data, but the second result set 1004-3, the third result set 1006-3, and the fourth result set 1008-3 remain the same. Because of this, the high water mark 1010-3 remains the same as well.
- at the fourth time T4, the first result set 1002-4, the second result set 1004-4, and the third result set 1006-4 remain the same, but additional rows are retrieved for the fourth result set 1008-4. Because the slowest result set has now advanced, the high water mark moves up accordingly.
- re-computing the high water mark is triggered when new rows are received for any of the queries. In some implementations, re-computing the high water mark is triggered based on a timer (e.g., once every second). In some implementations that use a timer, a first test determines whether any of the result sets have changed since the most recent update (or test). In some implementations, the timing intervals are non-linear. For example, perform a first test/update after half a second, perform a second test/update after another second, perform a third update after two more seconds, and so on.
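- The high water mark logic described above can be pictured with a small sketch; the row counts and the non-linear update schedule below are made-up values for illustration only:

```python
def high_water_mark(rows_received):
    """The high water mark is determined by the result set with the fewest rows:
    it marks how far ALL of the asynchronous queries have progressed."""
    return min(rows_received)

# Snapshots of four asynchronous queries at times T1..T4 (rows received so far).
snapshots = {
    "T1": [300, 150, 220, 100],  # query 4 is slowest -> mark = 100
    "T2": [300, 400, 380, 100],  # only faster queries advanced -> mark unchanged
    "T3": [520, 400, 380, 100],  # still limited by query 4 -> mark unchanged
    "T4": [520, 400, 380, 350],  # query 4 advanced -> mark rises to 350
}
for t, rows in snapshots.items():
    print(t, high_water_mark(rows))

# One possible non-linear schedule of seconds between re-computation checks.
update_intervals = [0.5, 1, 2, 4, 8]
```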
- FIG. 11 illustrates how a data preparation user interface updates while data is being loaded from a data source, in accordance with some implementations.
- a computer system 200 includes a data prep application 250 and a cache 1112, which stores partial query results.
- the data prep application 250 displays a user interface 100, which allows a user to interact with and modify the data received from a data source stored in a database 240.
- the database 240 may be stored at the computer system 200 or stored remotely (e.g., on a database server).
- the data is retrieved using multiple asynchronous queries 1120, and is received as partial query results 1122 (e.g., in blocks specified by the data prep application 250).
- the initial block for each of the queries is small so that the data can be loaded into the user interface quickly. This allows a user to begin work with the data immediately.
- the block sizes typically increase, such as doubling each time a block of rows is received.
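- A possible shape for such a loading schedule is sketched below; the initial block size, the growth factor, and the query form are assumptions, not values specified by the patent:

```python
def block_sizes(initial=1000, factor=2, count=7):
    """Sizes of successive row blocks requested from the data source: a small
    first block so the user interface can show data quickly, then progressively
    larger blocks (here, doubling) to reduce round trips."""
    size = initial
    for _ in range(count):
        yield size
        size *= factor

offset = 0
for size in block_sizes():
    # e.g., issue "SELECT ... LIMIT {size} OFFSET {offset}" against the source
    print(f"fetch rows {offset}..{offset + size - 1}")
    offset += size
```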
- the data refresh module 1110 updates the user interface 100 as new rows of data arrive.
- the data refresh module 1110 updates the user interface 100 according to the new high water mark. Every place where data is displayed (e.g., in the data value histograms of the profile pane, such as the histogram 1310 in Figure 13), the data is updated. In some cases, the user has taken action to edit the data and/or change parameters about how the data is viewed (e.g., scroll position or object selection), as illustrated in Figures 12 and 13. In these cases, the data refresh module 1110 updates the data according to the data changes and view parameters in order to preserve what the user is seeing (e.g., no wild jumps in the user interface 100).
- Figure 12 illustrates user interactions with partially loaded data in a data preparation user interface and subsequent updates to the user interface as additional data arrives asynchronously, in accordance with some implementations.
- partial results 1122 are retrieved from a database 240 and stored in a cache 1112.
- the data from the cache updates the user interface 100 at a first time 1200-1. Once some data is visible, the user is able to make changes 1212 to the data, such as filtering the data, excluding certain data, brushing the data, deleting a column, adding a new column, renaming a column, changing the data type of a column, or applying a transformation function to a column. These changes are applied to the data at a second time 1200-2.
- changes to the data are based on the cache and the current high water mark.
- the changes are also stored as a set of stored operations 1214 (e.g., as part of one or more nodes in a corresponding flow diagram).
- the data refresh module 1110 uses the updated set of rows from the cache (up to the new high water mark), and applies the stored operations 1214 to the retrieved data to update (1216) the user interface 100. In this way, at the third time 1200-3, the user still sees the changes, and the changes are applied to the new rows of data. In other words, the refreshed data does not revert, undo, or ignore the user's action 1212.
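- The refresh behavior can be illustrated with the following sketch; the operation encoding and the sample rows are hypothetical:

```python
def refresh_view(cache_rows, water_mark, stored_operations):
    """Re-derive what the user sees: take the rows retrieved so far (up to the
    current high water mark) and re-apply the user's stored operations, so a
    refresh never undoes an earlier edit such as a filter or a dropped column."""
    rows = cache_rows[:water_mark]
    for op in stored_operations:
        if op["type"] == "filter":
            rows = [r for r in rows if op["predicate"](r)]
        elif op["type"] == "drop_column":
            rows = [{k: v for k, v in r.items() if k != op["column"]} for r in rows]
    return rows

cache_rows = [{"State": "California", "Severity": 3},
              {"State": "New York",   "Severity": 1},
              {"State": "California", "Severity": 2}]
stored_operations = [
    {"type": "filter", "predicate": lambda r: r["State"] == "California"},
    {"type": "drop_column", "column": "Severity"},
]
print(refresh_view(cache_rows, water_mark=3, stored_operations=stored_operations))
# -> [{'State': 'California'}, {'State': 'California'}]
```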
- Figure 13 is an example of a profile pane for a data preparation user interface, in accordance with some implementations.
- the profile pane includes data value histograms for each of the displayed data fields, such as the histogram 1310 for the field “Day Week” (which identifies the day of week each accident occurred in the Accidents data set).
- Each bar in a data value histogram is a “bin” that corresponds to an individual data value or range of data values.
- for non-numeric fields, each bin typically has a single data value, whereas numeric fields are typically binned by ranges of values.
- the State data field has a bin for each state, including the California bin 1302.
- a user can select the California bin 1302 and filter the display to just rows from the Accidents table where the accident occurred in California (or exclude the rows from California).
- a user can also delete a column or rename a column. For example, a user can select the “Road Fnc” column 1304 and remove it from the display. Alternatively, the user could give the column a different name, such as “Road Condition”. In some cases, it also makes sense to change the data type of the selected data field.
- a user can also add a new column, such as adding a new column at the location 1306. When adding a new column, the data for the column is usually expressed as a function of other columns. For example, add a new column that computes a two-character state abbreviation corresponding to the State data value for each row.
- a user can also change data values for an existing column.
- the data values 1312 for day of week have been encoded as the numbers 1 - 7 in this data set.
- it would be useful to convert these to names for the days of the week (e.g., replace 1 with “Monday”, replace 2 with “Tuesday”, and so on).
- the user can make these edits to the data directly in the profile pane of the data prep user interface 100.
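- A column-wide recoding of this kind might look like the following sketch; the field name “Day Week” comes from the example above, while the row representation is an assumption made for illustration:

```python
DAY_NAMES = {1: "Monday", 2: "Tuesday", 3: "Wednesday", 4: "Thursday",
             5: "Friday", 6: "Saturday", 7: "Sunday"}

def recode_day_week(rows):
    """Replace the numeric 'Day Week' codes 1-7 with day names: the kind of
    column-wide value edit a user could make from the profile pane."""
    recoded = []
    for row in rows:
        row = dict(row)  # leave the cached rows untouched
        row["Day Week"] = DAY_NAMES.get(row["Day Week"], row["Day Week"])
        recoded.append(row)
    return recoded

print(recode_day_week([{"Day Week": 1}, {"Day Week": 6}]))
# -> [{'Day Week': 'Monday'}, {'Day Week': 'Saturday'}]
```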
- Disclosed implementations provide the following benefits:
- the join summary area of the Join Node allows the user to select join parts while data is loading.
- as an example, a user loads a table T from SQL Server and, while the data is still loading, removes column c.
- the system continues loading T, and also starts computing T-{c}; the user sees the metadata for this node show up, and can take action (e.g., remove column d).
- the system continues loading T, but abandons computation for T-{c}, deciding instead to directly compute T-{c,d} from T. Alternatively, the system continues to load T and compute T-{c}, and decides to compute T-{c,d} on top of the latter.
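- The equivalence of the two strategies can be seen in a tiny sketch; drop_columns and the sample rows are illustrative assumptions, not part of the patent:

```python
def drop_columns(rows, cols):
    """Return the rows with the named columns removed."""
    return [{k: v for k, v in row.items() if k not in cols} for row in rows]

# While T is still loading, T-{c} can be computed from the rows loaded so far,
# and T-{c,d} can be computed either directly from T or on top of T-{c}.
T_partial = [{"a": 1, "b": 2, "c": 3, "d": 4}]
T_minus_c = drop_columns(T_partial, {"c"})
T_minus_cd = drop_columns(T_minus_c, {"d"})           # built on the intermediate result
assert T_minus_cd == drop_columns(T_partial, {"c", "d"})  # same as computing from T
```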
- a user can take other actions in the user interface that will be preserved as more data arrives. For example, user actions to select, scroll, or change view state are preserved. Vertical and horizontal scrolling apply to both the profile pane and the data pane. If a user has selected a specific object in any of the panes, the selection is retained when new data arrives. The view state is maintained, including brushing and filters.
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980080277.0A CN113168413B (zh) | 2018-10-09 | 2019-10-01 | 用于交互式数据准备应用的多个数据集的相关增量加载 |
BR112021006722-1A BR112021006722A2 (pt) | 2018-10-09 | 2019-10-01 | carregamento incremental correlacionado de múltiplos conjuntos de dados para um aplicativo de preparação de dados interativo |
EP19791392.4A EP3864521A1 (en) | 2018-10-09 | 2019-10-01 | Correlated incremental loading of multiple data sets for an interactive data prep application |
CA3115220A CA3115220C (en) | 2018-10-09 | 2019-10-01 | Correlated incremental loading of multiple data sets for an interactive data prep application |
JP2021518509A JP7199522B2 (ja) | 2018-10-09 | 2019-10-01 | インタラクティブなデータプレップアプリケーションのための複数のデータセットの相関増分ロード |
AU2019356745A AU2019356745B2 (en) | 2018-10-09 | 2019-10-01 | Correlated incremental loading of multiple data sets for an interactive data prep application |
AU2022202376A AU2022202376B2 (en) | 2018-10-09 | 2022-04-11 | Correlated incremental loading of multiple data sets for an interactive data prep application |
JP2022203797A JP7304480B2 (ja) | 2018-10-09 | 2022-12-20 | インタラクティブなデータプレップアプリケーションのための複数のデータセットの相関増分ロード |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/155,818 | 2018-10-09 | ||
US16/155,818 US10885057B2 (en) | 2016-11-07 | 2018-10-09 | Correlated incremental loading of multiple data sets for an interactive data prep application |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020076546A1 true WO2020076546A1 (en) | 2020-04-16 |
Family
ID=68318939
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2019/053935 WO2020076546A1 (en) | 2018-10-09 | 2019-10-01 | Correlated incremental loading of multiple data sets for an interactive data prep application |
Country Status (7)
Country | Link |
---|---|
EP (1) | EP3864521A1 (zh) |
JP (2) | JP7199522B2 (zh) |
CN (1) | CN113168413B (zh) |
AU (2) | AU2019356745B2 (zh) |
BR (1) | BR112021006722A2 (zh) |
CA (1) | CA3115220C (zh) |
WO (1) | WO2020076546A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111666740A (zh) * | 2020-06-22 | 2020-09-15 | 深圳壹账通智能科技有限公司 | 流程图生成方法、装置、计算机设备和存储介质 |
WO2024211674A1 (en) * | 2023-04-07 | 2024-10-10 | Ab Initio Technology Llc | On-demand integration of records with data catalog identifiers |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI772233B (zh) * | 2021-11-29 | 2022-07-21 | 大陸商常州欣盛半導體技術股份有限公司 | Cof測試資料的自動整合方法 |
US12093249B2 (en) * | 2022-08-26 | 2024-09-17 | Oracle International Corporation | Dynamic inclusion of metadata configurations into a logical model |
CN117056359B (zh) * | 2023-10-09 | 2024-01-09 | 宁波银行股份有限公司 | 一种表格重建方法、装置、电子设备及存储介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140043325A1 (en) * | 2012-08-10 | 2014-02-13 | Microsoft Corporation | Facetted browsing |
US20170032026A1 (en) * | 2011-11-04 | 2017-02-02 | BigML, Inc. | Interactive visualization of big data sets and models including textual data |
US20170212944A1 (en) * | 2016-01-26 | 2017-07-27 | Socrata, Inc. | Automated computer visualization and interaction with big data |
US20180129374A1 (en) * | 2016-11-07 | 2018-05-10 | Tableau Software, Inc. | Generating and Applying Data Transformations in a Data Import Engine |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0887433A (ja) * | 1994-09-20 | 1996-04-02 | Matsushita Electric Ind Co Ltd | ファイルシステムのブロック管理システム |
US20030220928A1 (en) * | 2002-05-21 | 2003-11-27 | Patrick Durand | Method for organizing and querying a genomic and proteomic databases |
US8069188B2 (en) * | 2007-05-07 | 2011-11-29 | Applied Technical Systems, Inc. | Database system storing a data structure that includes data nodes connected by context nodes and related method |
CN101626313A (zh) * | 2009-08-10 | 2010-01-13 | 中兴通讯股份有限公司 | 网管系统客户端性能数据显示方法和网管系统客户端 |
JP2011138382A (ja) * | 2009-12-28 | 2011-07-14 | Sharp Corp | 画像処理装置、画像処理方法、プログラム、及び記録媒体 |
CN101916254B (zh) * | 2010-06-29 | 2016-07-06 | 用友软件股份有限公司 | 表单统计方法和装置 |
CN104750727B (zh) * | 2013-12-30 | 2019-03-26 | 沈阳亿阳计算机技术有限责任公司 | 一种列式内存存储查询装置及列式内存存储查询方法 |
CN105512139B (zh) * | 2014-09-26 | 2019-11-05 | 阿里巴巴集团控股有限公司 | 数据可视化的实现方法及装置 |
US10409802B2 (en) * | 2015-06-12 | 2019-09-10 | Ab Initio Technology Llc | Data quality analysis |
-
2019
- 2019-10-01 CA CA3115220A patent/CA3115220C/en active Active
- 2019-10-01 BR BR112021006722-1A patent/BR112021006722A2/pt unknown
- 2019-10-01 JP JP2021518509A patent/JP7199522B2/ja active Active
- 2019-10-01 CN CN201980080277.0A patent/CN113168413B/zh active Active
- 2019-10-01 WO PCT/US2019/053935 patent/WO2020076546A1/en unknown
- 2019-10-01 AU AU2019356745A patent/AU2019356745B2/en active Active
- 2019-10-01 EP EP19791392.4A patent/EP3864521A1/en active Pending
-
2022
- 2022-04-11 AU AU2022202376A patent/AU2022202376B2/en not_active Ceased
- 2022-12-20 JP JP2022203797A patent/JP7304480B2/ja active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170032026A1 (en) * | 2011-11-04 | 2017-02-02 | BigML, Inc. | Interactive visualization of big data sets and models including textual data |
US20140043325A1 (en) * | 2012-08-10 | 2014-02-13 | Microsoft Corporation | Facetted browsing |
US20170212944A1 (en) * | 2016-01-26 | 2017-07-27 | Socrata, Inc. | Automated computer visualization and interaction with big data |
US20180129374A1 (en) * | 2016-11-07 | 2018-05-10 | Tableau Software, Inc. | Generating and Applying Data Transformations in a Data Import Engine |
Non-Patent Citations (2)
Title |
---|
ANONYMOUS: "Cursor (databases) - Wikipedia, the free encyclopedia", 2 December 2012 (2012-12-02), XP055222764, Retrieved from the Internet <URL:https://en.wikipedia.org/w/index.php?title=Cursor_(databases)&oldid=526008371> [retrieved on 20151021] * |
See also references of EP3864521A1 * |
Also Published As
Publication number | Publication date |
---|---|
CA3115220A1 (en) | 2020-04-16 |
CN113168413B (zh) | 2022-07-01 |
AU2019356745B2 (en) | 2022-01-13 |
JP2023040041A (ja) | 2023-03-22 |
JP7304480B2 (ja) | 2023-07-06 |
AU2019356745A1 (en) | 2021-05-13 |
CN113168413A (zh) | 2021-07-23 |
AU2022202376B2 (en) | 2022-06-09 |
JP2022504205A (ja) | 2022-01-13 |
JP7199522B2 (ja) | 2023-01-05 |
AU2022202376A1 (en) | 2022-05-05 |
CA3115220C (en) | 2023-07-18 |
BR112021006722A2 (pt) | 2021-07-27 |
EP3864521A1 (en) | 2021-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2022203666B2 (en) | Generating and applying data transformations in a data import engine | |
US11188556B2 (en) | Correlated incremental loading of multiple data sets for an interactive data prep application | |
US10719528B2 (en) | Data preparation with shared data flows | |
US11243870B2 (en) | Resolution of data flow errors using the lineage of detected error conditions | |
AU2022202376B2 (en) | Correlated incremental loading of multiple data sets for an interactive data prep application |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19791392 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 3115220 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 2021518509 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112021006722 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 2019356745 Country of ref document: AU Date of ref document: 20191001 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2019791392 Country of ref document: EP Effective date: 20210510 |
|
ENP | Entry into the national phase |
Ref document number: 112021006722 Country of ref document: BR Kind code of ref document: A2 Effective date: 20210408 |