US20140317043A1 - Map Intuition System and Method - Google Patents

Map Intuition System and Method

Info

Publication number
US20140317043A1
Authority
US
United States
Prior art keywords
plurality
data
transformation
output
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/321,314
Inventor
Walter Hughes Lindsay
Joel Timothy Shellman
Original Assignee
Walter Hughes Lindsay
Joel Timothy Shellman
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US33319210P
Priority to US13/104,852 (US8850071B2)
Application filed by Walter Hughes Lindsay and Joel Timothy Shellman
Priority to US14/321,314
Publication of US20140317043A1
Assigned to SILICON VALLEY BANK reassignment SILICON VALLEY BANK SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LIAISON TECHNOLOGIES, INC.
Assigned to LIAISON TECHNOLOGIES, INC. reassignment LIAISON TECHNOLOGIES, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: SILICON VALLEY BANK
Assigned to SILICON VALLEY BANK reassignment SILICON VALLEY BANK CORRECTIVE ASSIGNMENT TO CORRECT THE LISTED PATENTS,NAMELY 5 NOS. LISTED AS PATENT NOS.(9590916,9344182,9650219,9588270,9294701),SHOULD BE LISTED AS APPLICATION NOS. PREVIOUSLY RECORDED ON REEL 047950 FRAME 0910. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST. Assignors: LIAISON TECHNOLOGIES, INC.


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computer systems using knowledge-based models
    • G06N5/04 Inference methods or devices
    • G06N20/00 Machine learning

Abstract

A map intuition system and method that involves machine learning techniques to analyze data sets and identify mappings and transformation rules as well as machine-human interactions to leverage human intuition and intelligence to rapidly complete a map.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of co-pending U.S. patent application Ser. No. 13/104,852 (the '852 application) entitled “MAP INTUITION SYSTEM AND METHOD”, which was filed on May 10, 2011, and which claims the benefit of the filing date of U.S. provisional patent application No. 61/333,192 entitled “MAP INTUITION SYSTEM AND METHOD”, which was filed on May 10, 2010, by the same inventors of this application. Both the utility application and the provisional application are hereby incorporated by reference as if fully set forth herein.
  • FIELD OF THE INVENTION
  • Presented herein is a map intuition system and method. More specifically, a method and system for intuitively creating data transformation is presented.
  • BACKGROUND OF THE INVENTION
  • Moving electronic data from one point to another in a computer network is ubiquitous. As data is moved, it often needs to be converted from one format to another.
  • For instance, business partners may send EDI data to each other, and the back-end systems which receive and process the data use other data formats, necessitating that the data be transformed from the EDI format into XML, a COBOL Copybook, or some other format.
  • Multiple technologies such as XSLT and Java programs exist for performing data conversion. When a company needs or decides to change the data conversion technology, typically, the data conversion itself needs to be re-implemented. That is, the data formats for the input and output, and the mapping between the two, typically needs to be re-created and tested. This is often a time-consuming and expensive task.
  • Accordingly, there is a continuing need for improved transformation technologies.
  • BRIEF SUMMARY OF THE INVENTION
  • Presented herein is a Map Intuition System and Method that involves machine learning techniques to analyze data sets and identify mappings and transformation rules as well as machine-human interactions to leverage human intuition and intelligence to rapidly complete a map.
  • In one aspect, the system and method are based on an extending-the-human metaphor, as opposed to a replace-the-human metaphor. Thus, machine-human interactions form the basis of the system and method. One of the interactions is to trigger automatic analysis of data sets to identify mappings and rules. But the primary interactions begin once that automatic analysis of the data completes.
  • Other aspects and embodiments of the map intuition system and method are described herein. This description is meant to fully describe the map intuition system and method, but not limit its design, function, or application.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other features of the preferred embodiments of the present invention will become more apparent in the detailed description in which reference is made to the appended drawings wherein:
  • FIG. 1 is an exemplified flowchart of one example of a map intuition system and method;
  • FIG. 2 is an exemplified flowchart of one example of a graphical user interface for a map intuition system and method;
  • FIG. 3 is an exemplified flowchart of one example of a map intuition system and method;
  • FIG. 4 is a graphical representation of a sample heat map for a map intuition system;
  • FIG. 5 is a graphical representation of a computer screenshot showing an example graphical user interface showing loading data samples into a map intuition system;
  • FIG. 6 is a graphical representation of a computer screenshot showing an example graphical user interface showing mapping relationships between source data format definitions and target data format definitions;
  • FIG. 7 is a graphical representation of a computer screenshot showing an example graphical user interface showing a map with field and group constructs being related between source and target;
  • FIG. 8 is an exemplified flowchart of an inference engine for use in a map intuition system, illustrating the system comparing input and output values of input data samples with an output data sample;
  • FIG. 9 is an exemplified flowchart of an inference engine for use in a map intuition system, illustrating the system identifying the strength of correspondences in values from one region of an input data sample with a region of an output data sample;
  • FIG. 10 is an exemplified flowchart of an inference engine for use in a map intuition system, illustrating the system using inference engine information from individual data samples to identify relationships between the input and output data format definitions;
  • FIG. 11 is a graphical representation of one aspect of a GUI for a map intuition system that uses inference engine information to focus a user's attention and organize the user's work based on a variety of information; and
  • FIG. 12 is a graphical representation of a computer screenshot showing an example graphical user interface for allowing a user to examine and complete the logic needed to convert data from the input data format definition to the output data format definition.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present systems and apparatuses and methods are understood more readily by reference to the following detailed description, examples, drawings, and claims, and their previous and following descriptions. However, before the present devices, systems, and/or methods are disclosed and described, it is to be understood that this invention is not limited to the specific devices, systems, and/or methods disclosed unless otherwise specified, as such can, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting.
  • The following description of the invention is provided as an enabling teaching of the invention in its best, currently known embodiment. To this end, those skilled in the relevant art will recognize and appreciate that many changes can be made to the various aspects of the invention described herein, while still obtaining the beneficial results of the present invention. It will also be apparent that some of the desired benefits of the present invention can be obtained by selecting some of the features of the present invention without utilizing other features. Accordingly, those who work in the art will recognize that many modifications and adaptations to the present invention are possible and can even be desirable in certain circumstances and are a part of the present invention. Thus, the following description is provided as illustrative of the principles of the present invention and not in limitation thereof.
  • As used throughout, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a data set” can include two or more such data sets unless the context indicates otherwise.
  • Ranges can be expressed herein as from “about” one particular value, and/or to “about” another particular value. When such a range is expressed, another aspect includes from the one particular value and/or to the other particular value. Similarly, when values are expressed as approximations, by use of the antecedent “about,” it will be understood that the particular value forms another aspect. It will be further understood that the endpoints of each of the ranges are significant both in relation to the other endpoint, and independently of the other endpoint.
  • As used herein, the terms “optional” or “optionally” mean that the subsequently described event or circumstance may or may not occur, and that the description includes instances where said event or circumstance occurs and instances where it does not.
  • Presented herein is a map intuition system and method that involves machine learning techniques to analyze data sets and identify mappings and transformation rules as well as machine-human interactions to leverage human intuition and intelligence to rapidly complete a map.
  • In one aspect, the system and method are based on an extending-the-human metaphor, as opposed to a replace-the-human metaphor. Thus, machine-human interactions form the basis of the system and method. One of the interactions is to trigger automatic analysis of data sets to identify mappings and rules. But the primary interactions begin once that automatic analysis of the data completes.
  • In one exemplified aspect, the method comprises at least one data flow involving a data transformation M1. The data inputs can come from a variety of sources and the data outputs can be sent to a variety of destinations. The data flow can move different kinds of data and initiate a plurality of transformations. In one aspect, the data flow comprises a plurality of data flows within a network.
  • The method comprises capturing one or more inputs feeding into the data transformation M1 and the corresponding output for each input. As one skilled in the art can appreciate, an input can be one or more files. In another aspect, the method comprises using a user interface to define the data format of the captured samples, both input and output.
  • In another exemplified aspect, the captured samples are then fed into an inference engine, which produces candidate transformation information. The user then reviews the candidate transformation information to complete the definition of the transformation M2 which generates the same output as does the transformation M1 from the original data flow. This process can be repeated for additional data samples. Once definition of the transformation M2 is complete, the user can save the definition and can also deploy it into a new data flow.
  • In one aspect, a graphical user interface (“GUI”) is provided for the user to review the candidate transformation information and complete the definition of the transformation. In another aspect, the steps the user takes are the following: 1) specifying the selection criteria for the candidates; 2) selecting items from the filtered list; 3) promoting the selected items to be part of the definition of the transformation; 4) running a test to identify the success of the definition of the transformation; 5) de-selecting or modifying parts of the definition of the transformation, after which the user can return to step 4, if necessary; and 6) returning to step 1 until the user is substantially satisfied. The system takes advantage of previously-known information about data format definitions, including previous transformations involving the input or output data format definition, as well as other information, to aid a user in completion of the transformation when the inference engine could not identify any helpful correlation between the input and the output. For example and not meant to be limiting, if a lookup from an external table is needed to convert values from one scheme to another (e.g., to convert between UPC codes and a vendor's part numbers), the inference engine will not detect a correlation. However, there is a correlation, and by having multiple kinds of information available, the user can identify patterns the inference engine could not identify. Using the GUI to display the patterns from which to choose helps the user complete the transformation definition.
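  • The iterative review loop of steps 1 through 5 above can be sketched as follows. This is a minimal illustration only; the class and function names (Candidate, review_loop, the transform callback) are hypothetical and not drawn from the disclosure.

```python
# Hypothetical sketch of the candidate-review loop: filter by selection
# criteria, promote selected candidates into the transformation definition,
# then test the definition against captured input/output samples.
from dataclasses import dataclass

@dataclass
class Candidate:
    source_field: str
    target_field: str
    score: float          # numerical degree of correspondence
    promoted: bool = False

def review_loop(candidates, inputs, expected_outputs, transform, min_score=0.8):
    """One pass of steps 1-5: filter, select, promote, test."""
    # 1) specify selection criteria; 2) select items from the filtered list
    selected = [c for c in candidates if c.score >= min_score]
    # 3) promote the selected items into the definition of the transformation
    for c in selected:
        c.promoted = True
    rules = [(c.source_field, c.target_field) for c in candidates if c.promoted]
    # 4) run a test: which captured samples does the definition not reproduce?
    failures = [i for i, out in zip(inputs, expected_outputs)
                if transform(i, rules) != out]
    return rules, failures
```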
  • In one aspect, the GUI comprises a “heat map.” This heat map can display the entire source data format definition(s) on one axis and the target data format definition(s) on another axis. Displaying recursive or large interfaces in a manageable form as an axis may require pruning irrelevant data, or otherwise restricting the amount of the data format definition represented on the axis at one time. In one exemplified aspect, the heat map can color or otherwise highlight the intersection between source and target definitions, visually highlighting the goodness-of-fit data returned from automatic data analysis. This allows the human eye to rapidly identify patterns in the candidate mappings. For example and not meant to be limiting, if a source data format is mapped to an almost identical target data format, the automatic analysis might generate a diagonal line in the heat map (other information might also display, but in most cases the human eye can identify such a line amidst the noise of other candidate mapping information).
  • Candidate mappings that strongly correspond with the majority of the data sets can appear more prominently in the heat map than weaker candidate mappings. For instance, by varying the brightness of portions of the map, the strength of candidate mappings can be made clear. In other situations, such as for color blind individuals, a threshold can be specified, and all candidates which are sufficiently strong (or weak) can be displayed while other candidates are hidden. Other mechanisms can also be used to filter out unwanted information to allow the human to more rapidly comprehend or interact with the information in the heat map.
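  • The threshold-based filtering described above can be sketched as a simple operation on a source-by-target matrix of candidate strengths. The field names and score values below are illustrative only.

```python
# Illustrative sketch of heat-map filtering: candidate mapping strengths form
# a (source field, target field) -> score matrix, and a threshold hides weak
# candidates (e.g., as an alternative to brightness for color-blind users).

def threshold_heat_map(scores, threshold):
    """scores: {(source_field, target_field): strength in [0, 1]}.
    Returns only the cells strong enough to display."""
    return {cell: s for cell, s in scores.items() if s >= threshold}

scores = {("name", "full_name"): 0.95,
          ("qty", "quantity"): 0.90,
          ("memo", "notes"): 0.20}
visible = threshold_heat_map(scores, 0.5)  # hides the weak ("memo", "notes") cell
```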
  • The data from the automatic data analysis (e.g. candidate mappings and transformation rules) can be combined with additional information in the same heat map, or displayed alongside the heat map for other data. For instance, if a mapping specification (which might not reflect the actual Transformation in use in the data flow) is available and is loaded or entered into the system, that information can be displayed in one color in the heat map, the information from automatic data analysis in another color, and the overlap could appear as a combination of the two colors. Of course, it is contemplated that other methods of highlighting can be employed other than color.
  • In addition, one or both axes of a heat map can be based on actual sample data (such as an aggregate of multiple data samples overlaid on the same axis) instead of being based on the data format. For instance, if a message format is very large or is recursive, but the message size in the data flow is more moderate, this would allow the heat map to be more manageable.
  • In addition to the heat map metaphor, the source data format can be represented on the left of the screen and the target on the right, and lines or other techniques can be used to display the mapping information. Data mapping tools today often use a tree metaphor for the data format display.
  • In such a display, the results of automatic data set analysis can also be displayed as mapping lines or via other visual techniques between the source and target data formats. As with heat maps, an aggregate representation of data samples for the source or target can be used. The candidate mapping and rules and other information from the automatic data set analysis can be combined with other data, as with heat maps.
  • The results of automatic data set analysis can also be displayed as a list. Displaying the information as a list can simplify a bottom-up approach of looking at each piece of information individually, accepting or rejecting the candidate, and then proceeding to the next piece of information. The user can sort the list, and the list of candidates can be filtered and prioritized with assistance from other sources of data. In addition, the divergences between the output of a transform and the expected output can be handled as a list. As can be appreciated, one advantage of a list is that the user gets a sense of how much work is left to do.
  • As discussed herein above, the map completion GUI is a computer user interface for allowing humans to interact with source and target data formats, the results of automatic data set analysis, and other information such as mapping specification data, and to produce a working set of mapping and transformation rules.
  • Using the heat map approach, for instance, a user can select a region of the heat map and expand it, allowing more detail about that region of the heat map to appear since that region appears larger on the screen (i.e., “zoom in”). At sufficient magnification, details of the source or target data and/or data format, candidate mappings, transformation rules, etc. appear as text or graphically. In zooming back out, the level of detail reduces. A user might also focus on data by selecting portions of the source or target data format or data samples and filtering out other portions.
  • A user can pick portions of the heat map (e.g. select a region or a single intersection or a set of intersections), and perform operations on those portions. For instance, a user might promote candidate mappings or transformation rules to be hypothesis mappings or transformation rules and run a test based on the available input data samples to see if those mappings and transformation rules produce the desired outputs. Information about data sets for which the mappings do or do not produce the desired outputs can be displayed as details in the heat map. Hypothesis mappings can be promoted to accepted mappings, or demoted back to candidate mappings.
  • In addition, a user can navigate from the heat map to other portions or representations of the overall GUI. For instance, a user could navigate from a portion of a heat map or the axes of a heat map to the relevant portion of the display with the source on the left and the target on the right with optional mapping lines.
  • In another aspect, the heat map can be used to display additional details in another display. For instance, a user could display all field values corresponding to a field in the source or target data format. In yet another aspect, the results of testing operations can also be merged with other data of a heat map. For instance, mapping information (e.g. candidate, promoted, etc.) and the success or failure of a test could be overlaid so the human can see what regions of the map are not producing the desired output, and in many cases could at a glance identify alternative mappings which might better produce the desired results, or might identify transformation logic which could help produce the desired output.
  • The results of different stages of the same information can also be compared in a heat map. For instance, the information of two or more different test runs could be merged or contrasted. The definitions of two or more versions of a transformation, or of different but similar transformations, could be simultaneously displayed or contrasted.
  • Similarly, visual displays with the source data format or data on the left and target data format or data on the right can make apparent details which might be obscured in a heat map. For instance, to compare in detail the results of a test run, such a display allows the human to see the output of a test and compare it with the desired output, and to see it in a form closer to the actual syntax of the data than is possible with a heat map.
  • In these and similar visual metaphors, the end goal is to produce a Transformation which generates substantially the same outputs as an existing data flow, or which produces the outputs which have been designed as test cases for creating a new Transformation from sample data.
  • As discussed herein above, data samples that are captured are fed into an inference engine, which produces candidate transformation information. In one aspect, the inference engine identifies value spaces relevant to the field data type and value in the field for each input and output field value in the data set, and represents the value in those value spaces.
  • In one aspect, the inference engine first completes a field to field comparison and ranking. This is completed by comparing each input field value with each output field value, and identifying the degree of correspondence between that pair. The degree of correspondence can be represented numerically. In another aspect, this comparison and identification of degree is calculated by determining one or more of the following: 1) whether the output field is equivalent to the input field value; 2) whether the output field value is a subset of the input field value; 3) whether the output field value is able to be partially constructed from the input field value; and 4) how much of the output can be constructed.
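  • A minimal sketch of this field-to-field comparison follows. The scoring scheme is an assumption for illustration; the disclosure specifies only the four criteria, not how they are weighted, and the function name and constants are hypothetical.

```python
# Hypothetical scoring of the degree of correspondence between an input field
# value and an output field value, following the four criteria above:
# 1) equivalence, 2) subset, 3) partial constructibility, 4) how much of the
# output can be constructed.

def degree_of_correspondence(input_value: str, output_value: str) -> float:
    if output_value == input_value:          # 1) output equivalent to input
        return 1.0
    if output_value in input_value:          # 2) output is a subset of input
        return 0.8
    # 3)/4) estimate how much of the output can be built from the input;
    # here approximated by shared characters (an illustrative choice only)
    constructible = sum(1 for ch in set(output_value) if ch in set(input_value))
    return 0.5 * constructible / max(len(set(output_value)), 1)
```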
  • The inference engine can then cluster the source and target field matches. For example, for each output field value Vo in the data set, the engine will identify the set No of other output field values within some “distance” D in the document from Vo. Then, for the items in No, it identifies the set S of input items related to No. For each item Vs in S, it then identifies the set Ni of the other input field values within some “distance” in the document from Vs. In one aspect, the inference engine will then rank the size, quality of match, and distinctiveness of the matches of the sets Ni compared to No, and if a threshold is passed for some Ni, it will increase the ranking of the degree of correspondence between items in Ni and items in No.
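  • The locality-based boosting described above can be sketched as follows, using field positions as a stand-in for “distance in the document.” All names and the boost amount are illustrative assumptions, and the neighborhood test is deliberately simplified relative to the ranking the disclosure describes.

```python
# Hypothetical sketch of locality clustering: matched field-value pairs whose
# document neighbors also match get their correspondence ranking boosted.

def boost_by_locality(pairs, in_pos, out_pos, rank, D=2, boost=0.1):
    """pairs: set of (input_field, output_field) matches.
    in_pos / out_pos: field -> position in the document.
    rank: {(input_field, output_field): score}; returns a boosted copy."""
    boosted = dict(rank)
    for (i, o) in pairs:
        # No: neighboring matched output fields within distance D of o
        No = {o2 for (_, o2) in pairs
              if o2 != o and abs(out_pos[o2] - out_pos[o]) <= D}
        # Ni: neighboring matched input fields within distance D of i
        Ni = {i2 for (i2, _) in pairs
              if i2 != i and abs(in_pos[i2] - in_pos[i]) <= D}
        # if the neighborhoods also match each other, raise this pair's ranking
        if any((i2, o2) in pairs for i2 in Ni for o2 in No):
            boosted[(i, o)] = min(1.0, boosted[(i, o)] + boost)
    return boosted
```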
  • The inference engine, in one aspect, will also merge the rankings from the data level to the data definition level. For the target and source data definitions which describe the data in the data sets, the inference engine will first examine the target data format definition and perform the following steps for each group or field node No in the target data definition which has matching data in at least one of the data sets. In one aspect, the inference engine examines the target data format definition by performing a post-order traversal. Then, the inference engine will identify the set Co of correlations directly involving No, all descendants in the data format definition of No, or all descendants Do of an ancestor of No (such that Do can be reached from an ancestor without passing through a path step with maximum cardinality greater than 1). In one aspect, the inference engine will then perform a post-order traversal of the source data format definition and perform the following steps for each group or field node Ni in the source data definition which appears in Co. Then, the inference engine will identify the set Ci of correlations directly involving Ni, all descendants in the data format definition of Ni, or all descendants Di of an ancestor of Ni (such that Di can be reached from an ancestor without passing through a path step with maximum cardinality greater than 1). Then, the inference engine evaluates the distinctiveness and quality of matches between Co and Ci, and if strongly correlated, a numerical rating is identified for the correlation and the correlation and its rating are recorded as part of the output of the inference engine.
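  • A much-simplified sketch of this merging step follows: a post-order traversal visits children before parents, so each group node in the data format definition can aggregate the correlations of its descendants. The tree representation and the mean aggregation are illustrative assumptions; the disclosure's actual rating combines distinctiveness and match quality.

```python
# Hypothetical sketch of lifting field-level correlations to the data format
# definition level via a post-order (bottom-up) traversal of the target tree.

def post_order(node, children):
    """Yield nodes of a tree bottom-up (children before parents)."""
    for child in children.get(node, []):
        yield from post_order(child, children)
    yield node

def merge_correlations(root, children, field_corr):
    """field_corr: {leaf_field: correlation score}. Each group node receives
    the mean of its children's scores, computed bottom-up."""
    merged = dict(field_corr)
    for node in post_order(root, children):
        kids = children.get(node, [])
        if kids:
            merged[node] = sum(merged[k] for k in kids) / len(kids)
    return merged
```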
  • It is contemplated that, at times, the relationship for field values in input and output data samples may not be apparent to the inference engine. As such, the user of the map intuition system needs to substantially replicate the previous behavior without the assistance of the inference engine. In this aspect, the map intuition system can make use of locality. For instance, if the input and output data format are organized according to similar principles, and the input field or group Vi is related to output field or group Vo, then it stands to reason that neighbors of Vi will more likely be related to neighbors of Vo. The GUI will allow the user to focus on the nearby relations of a target for which the inference engine has not identified a mapping relationship.
  • In this aspect, the map intuition system can also make use of previous uses of the source and target. For instance, if the data format has been previously mapped, the mechanisms used to populate the missing output field may be similar to the mechanisms used in previous mappings (transformations). The map intuition system will let the user rapidly search previous mappings of the data format.
  • Additionally, the map intuition system can comprise a library of common functions. In this aspect, the map intuition system will allow users to register conversion functions, and will allow the user to ask the system to search among the conversion functions for one which will convert from the input to the output. For example, if the input value needs to be looked up in a conversion table, and the output value can be read from the conversion table, the system will identify use of the conversion table as a candidate. In one aspect, the user can trigger this search while evaluating the results of the inference engine.
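  • The function-library search described above can be sketched as follows. The registry mechanism, function names, and the sample UPC-to-part-number table are all hypothetical illustrations of the conversion-table example given in the text.

```python
# Hypothetical sketch of the common-function library: users register
# conversion functions, and the system tries each one against a captured
# input/output pair, offering any function that reproduces the output as a
# candidate for the transformation definition.

REGISTRY = {}

def register(name):
    """Decorator that records a conversion function in the library."""
    def deco(fn):
        REGISTRY[name] = fn
        return fn
    return deco

@register("upc_to_part_number")
def upc_lookup(value, table={"012345678905": "PN-1001"}):
    # Example of a table-driven conversion the inference engine alone
    # could not discover; the table contents are illustrative.
    return table.get(value)

def search_conversions(input_value, output_value):
    """Return names of registered functions that map input to output."""
    candidates = []
    for name, fn in REGISTRY.items():
        try:
            if fn(input_value) == output_value:
                candidates.append(name)
        except Exception:
            pass  # a function that cannot handle this value is not a candidate
    return candidates
```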
  • Although several embodiments of the invention have been disclosed in the foregoing specification, it is understood by those skilled in the art that many modifications and other embodiments of the invention will come to mind to which the invention pertains, having the benefit of the teaching presented in the foregoing description and associated drawings. It is thus understood that the invention is not limited to the specific embodiments disclosed herein above, and that many modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although specific terms are employed herein, as well as in the claims which follow, they are used only in a generic and descriptive sense, and not for the purposes of limiting the described invention, nor the claims which follow.

Claims (15)

1. A method for defining a data transformation, the method comprising:
a processor receiving a plurality of inputs for transformation, said plurality of inputs having a first format;
said processor transforming said plurality of inputs into a plurality of outputs, said plurality of outputs having another format;
said processor capturing at least one of said plurality of inputs prior to said transformation and said processor capturing at least one of said plurality of outputs subsequent to said transformation;
said processor providing said captured at least one input and at least one output to an inference engine;
said inference engine converting said captured at least one input and at least one output into candidate transformation information;
said processor receiving input through a user interface identifying a different transformation for transforming said plurality of inputs into said plurality of outputs, said identification being based at least in part on said candidate transformation information.
2. The method according to claim 1, wherein said inference engine ranks at least a portion of the candidate transformation information from lower to higher and visually highlights higher ranking candidate transformation information.
3. The method according to claim 1, wherein each input and each output includes at least one file.
4. The method according to claim 3, wherein each at least one file comprises a plurality of data fields, wherein each data field has a field value and a field type.
5. The method according to claim 4, wherein said method is iterative.
6. The method according to claim 1, wherein said candidate transformation information includes a list of candidates.
7. The method according to claim 6, wherein said list of candidates includes a plurality of lists of candidates.
8. The method according to claim 6 wherein said identifying a different transformation includes:
specifying a selection criteria for said candidates;
selecting a candidate from said list based at least in part on said selection criteria;
comparing an output from said different method of transforming said plurality of inputs into said plurality of outputs to a corresponding captured output sample to produce a relative difference;
determining if the relative difference is within a predetermined criteria; and, when the relative difference is not within the predetermined criteria, modifying the different method of transforming and returning to the comparing step.
9. The method according to claim 4, further including said inference engine:
identifying value spaces relevant to the field type and field value of said inputs to the transformation and outputs from the different transformation;
comparing field values of said inputs to the transformation with field values of the outputs from the different transformation to provide a numerical degree of correspondence for each; and
visually highlighting candidates with a numerical degree of correspondence greater than a threshold.
10. An apparatus for defining a data transformation from a plurality of inputs having an input format to a plurality of corresponding outputs having an output format, the apparatus comprising:
a non-transitory machine-readable medium; and
a plurality of instructions in the machine-readable medium which, when executed by a processing machine, enable the processing machine to perform operations comprising:
transforming said plurality of inputs into said plurality of outputs;
capturing at least one of said plurality of inputs prior to said transforming and capturing at least one of said plurality of outputs subsequent to said transforming;
providing an inference engine configured to receive said at least one captured input and captured output and convert said at least one captured input and captured output into candidate transformation information; and
providing a user interface having features that enable review of said candidate transformation information, and identification of a different transformation of said plurality of inputs to said plurality of outputs based at least in part on said review of said candidate transformation information.
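The capture step of claim 10 amounts to wrapping the transformation so that input samples are recorded before it runs and output samples after. A hypothetical sketch (the wrapper name and the capture of every element, rather than a sampled subset, are illustrative choices):

```python
def capture_and_transform(transform, inputs):
    """Run a transformation while capturing inputs prior to transforming
    and outputs subsequent to transforming, for later use by the
    inference engine."""
    captured_inputs = list(inputs)               # captured prior to transforming
    outputs = [transform(x) for x in captured_inputs]
    captured_outputs = list(outputs)             # captured subsequent to transforming
    return outputs, (captured_inputs, captured_outputs)
```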
11. The apparatus according to claim 10, further comprising:
said user interface providing a search function to permit the user to search at least one previous transformation of said input data.
12. The apparatus according to claim 10, further comprising performing a plurality of transformations using said different transformation.
13. The apparatus according to claim 10, further comprising said user interface displaying a library of common functions enabling a user to search for common data transformations.
14. An inference engine method for ranking and highlighting candidates in a map intuition system, comprising:
receiving at a processor a source data set having data with input field values, each source data set having a source data set definition describing the data in the source data set;
receiving at the processor a target data set derived at least in part from said source data set, having data with output field values Vo, each target data set having a target data set definition describing the data in the target data set; and
said processor comparing each input field value with each output field value, identifying a degree of correspondence between each pair of values, ranking the input and output field values into clusters, and merging the rankings from a data level to a data definition level.
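The pairwise comparison and merging of claim 14 can be sketched as scoring every (input field, output field) pair in each record and then averaging the per-record (data-level) scores into a single data-definition-level ranking. The similarity measure and the averaging rule are assumptions for illustration; field names stand in for the data set definitions.

```python
from collections import defaultdict
from difflib import SequenceMatcher

def merge_rankings(source_records, target_records):
    """Compare each input field value with each output field value per
    record, then merge the rankings from the data level to the data
    definition level by averaging over records."""
    totals = defaultdict(float)
    for src, tgt in zip(source_records, target_records):
        for in_field, in_val in src.items():
            for out_field, out_val in tgt.items():
                totals[(in_field, out_field)] += SequenceMatcher(
                    None, str(in_val), str(out_val)).ratio()
    n = len(source_records) or 1
    # Highest average correspondence first: the definition-level ranking.
    return sorted(((pair, total / n) for pair, total in totals.items()),
                  key=lambda item: item[1], reverse=True)
```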
15. The inference engine method of claim 14, wherein the step of comparing each input field value with each output field value comprises:
determining whether the output field value is equivalent to the input field value;
determining whether the output field value is a subset of the input field value;
determining whether the output field value is able to be partially constructed from the input field value; and
determining how much of the output field value can be constructed.
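The checks of claim 15 can be sketched as a single classifier over a pair of field values. This is a hypothetical reading in which "subset" means a contiguous substring and "how much can be constructed" is the fraction of output characters present in the input; the claim does not fix these definitions.

```python
def classify(output_value: str, input_value: str):
    """Classify an output field value against an input field value:
    equivalent, a subset, or partially constructible, with a fraction
    indicating how much of the output value can be constructed."""
    if output_value == input_value:
        return "equivalent", 1.0
    if output_value and output_value in input_value:
        return "subset", len(output_value) / len(input_value)
    constructible = sum(1 for ch in output_value if ch in input_value)
    fraction = constructible / len(output_value) if output_value else 0.0
    return "partial", fraction
```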
US14/321,314 2010-05-10 2014-07-01 Map Intuition System and Method Abandoned US20140317043A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US33319210P true 2010-05-10 2010-05-10
US13/104,852 US8850071B2 (en) 2010-05-10 2011-05-10 Map intuition system and method
US14/321,314 US20140317043A1 (en) 2010-05-10 2014-07-01 Map Intuition System and Method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/321,314 US20140317043A1 (en) 2010-05-10 2014-07-01 Map Intuition System and Method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/104,852 Continuation US8850071B2 (en) 2010-05-10 2011-05-10 Map intuition system and method

Publications (1)

Publication Number Publication Date
US20140317043A1 true US20140317043A1 (en) 2014-10-23

Family

ID=45494486

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/104,852 Active 2032-07-25 US8850071B2 (en) 2010-05-10 2011-05-10 Map intuition system and method
US14/321,314 Abandoned US20140317043A1 (en) 2010-05-10 2014-07-01 Map Intuition System and Method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/104,852 Active 2032-07-25 US8850071B2 (en) 2010-05-10 2011-05-10 Map intuition system and method

Country Status (1)

Country Link
US (2) US8850071B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8806204B2 (en) 2011-06-20 2014-08-12 Liaison Technologies, Inc. Systems and methods for maintaining data security across multiple active domains
US9437022B2 (en) * 2014-01-27 2016-09-06 Splunk Inc. Time-based visualization of the number of events having various values for a field

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040117151A1 (en) * 2002-12-14 2004-06-17 Perspicuous Insights Llc Interactive diagnostic system and method driven by expert system
US20040250133A1 (en) * 2001-09-04 2004-12-09 Lim Keng Leng Albert Computer security event management system
US20070022194A1 (en) * 2003-09-25 2007-01-25 Brown David W Database event driven motion systems
US20090259629A1 (en) * 2008-04-15 2009-10-15 Yahoo! Inc. Abbreviation handling in web search
US20090273340A1 (en) * 2008-05-01 2009-11-05 Cory James Stephanson Self-calibrating magnetic field monitor
US20100274795A1 (en) * 2009-04-22 2010-10-28 Yahoo! Inc. Method and system for implementing a composite database

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6192068B1 (en) * 1996-10-03 2001-02-20 Wi-Lan Inc. Multicode spread spectrum communications system
US6226408B1 (en) * 1999-01-29 2001-05-01 Hnc Software, Inc. Unsupervised identification of nonlinear data cluster in multidimensional data
US6711585B1 (en) * 1999-06-15 2004-03-23 Kanisa Inc. System and method for implementing a knowledge management system
JP3677192B2 (en) * 2000-04-19 2005-07-27 シャープ株式会社 Image processing device
US20030126138A1 (en) * 2001-10-01 2003-07-03 Walker Shirley J.R. Computer-implemented column mapping system and method
US7440902B2 (en) * 2002-04-12 2008-10-21 International Business Machines Corporation Service development tool and capabilities for facilitating management of service elements
US7356522B2 (en) * 2002-04-19 2008-04-08 Computer Associates Think, Inc. System and method for building a rulebase
US7188155B2 (en) * 2002-12-17 2007-03-06 International Business Machines Corporation Apparatus and method for selecting a web service in response to a request from a client device
US20040181670A1 (en) * 2003-03-10 2004-09-16 Carl Thune System and method for disguising data
US8020156B2 (en) * 2005-09-12 2011-09-13 Oracle International Corporation Bulk loading system and method
US7599944B2 (en) * 2005-12-16 2009-10-06 Microsoft Corporation Electronic data interchange (EDI) schema simplification interface
US20100318511A1 (en) * 2007-11-13 2010-12-16 VirtualAgility Techniques for connectors in a system for collaborative work
US8015139B2 (en) * 2007-03-06 2011-09-06 Microsoft Corporation Inferring candidates that are potentially responsible for user-perceptible network problems
US20080294624A1 (en) * 2007-05-25 2008-11-27 Ontogenix, Inc. Recommendation systems and methods using interest correlation
JP4518165B2 (en) * 2008-03-11 2010-08-04 富士ゼロックス株式会社 Related document presentation system and program
GB0809443D0 (en) * 2008-05-23 2008-07-02 Wivenhoe Technology Ltd A Type-2 fuzzy based system for handling group decisions
US8768976B2 (en) * 2009-05-15 2014-07-01 Apptio, Inc. Operational-related data computation engine

Also Published As

Publication number Publication date
US20120023261A1 (en) 2012-01-26
US8850071B2 (en) 2014-09-30

Similar Documents

Publication Publication Date Title
US10447712B2 (en) Systems and user interfaces for dynamic and interactive investigation of bad actor behavior based on automatic clustering of related data in various data structures
Schulz et al. A design space of visualization tasks
Sun et al. A survey of visual analytics techniques and applications: State-of-the-art research and future challenges
US8849840B2 (en) Quick find for data fields
Ceneda et al. Characterizing guidance in visual analytics
Heimerl et al. Visual classifier training for text document retrieval
Brehmer et al. Overview: The design, adoption, and analysis of a visual document mining tool for investigative journalists
Blomqvist The use of Semantic Web technologies for decision support–a survey
Cao et al. Dicon: Interactive visual analysis of multidimensional clusters
Alves et al. Requirements engineering for software product lines: A systematic literature review
Fleming et al. An information foraging theory perspective on tools for debugging, refactoring, and reuse tasks
Ragan et al. Characterizing provenance in visualization and data analysis: an organizational framework of provenance types and purposes
JP5947910B2 (en) Method and apparatus for data analysis, and computer program
US7246328B2 (en) Method, computer program product, and system for performing automated linking between sheets of a drawing set
Mousseau et al. A user-oriented implementation of the ELECTRE-TRI method integrating preference elicitation support
US6693651B2 (en) Customer self service iconic interface for resource search results display and selection
US6873990B2 (en) Customer self service subsystem for context cluster discovery and validation
US7739221B2 (en) Visual and multi-dimensional search
Bertini et al. Surveying the complementary role of automatic data analysis and visualization in knowledge discovery
Billari Sequence analysis in demographic research
US6785676B2 (en) Customer self service subsystem for response set ordering and annotation
US8108398B2 (en) Auto-summary generator and filter
EP1672537B1 (en) Data semanticizer
Mäder et al. Towards automated traceability maintenance
US7705847B2 (en) Graph selection method

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:LIAISON TECHNOLOGIES, INC.;REEL/FRAME:047950/0910

Effective date: 20170329

Owner name: LIAISON TECHNOLOGIES, INC., GEORGIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:048043/0532

Effective date: 20181217

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE LISTED PATENTS,NAMELY 5 NOS. LISTED AS PATENT NOS.(9590916,9344182,9650219,9588270,9294701),SHOULD BE LISTED AS APPLICATION NOS. PREVIOUSLY RECORDED ON REEL 047950 FRAME 0910. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST;ASSIGNOR:LIAISON TECHNOLOGIES, INC.;REEL/FRAME:051255/0831

Effective date: 20170329