CA2310333A1 - Data decomposition/reduction method for visualizing data clusters/sub-clusters - Google Patents

Data decomposition/reduction method for visualizing data clusters/sub-clusters Download PDF

Info

Publication number
CA2310333A1
Authority
CA
Canada
Prior art keywords
data
level
clusters
projection
visualization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002310333A
Other languages
French (fr)
Inventor
Joseph Y. Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Catholic University of America
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2310333A1 publication Critical patent/CA2310333A1/en
Abandoned legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/231Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/40Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Probability & Statistics with Applications (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)
  • Complex Calculations (AREA)

Abstract

High-dimensionality data is subjected to a hierarchical visualization that allows the complete data set to be viewed in a top-down hierarchy in terms of clusters and sub-clusters at deeper levels. The data set is modeled with standard finite normal mixture models and probabilistic principal component projections, the parameters of which are estimated using expectation-maximization and principal component analysis under the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criteria. The high-dimensional raw data is processed using principal component analysis to reveal the dominant distribution of the data at a first level. Thereafter, the so-processed information is further processed to reveal sub-clusters within the primary clusters. The various clusters and sub-clusters at the various hierarchical levels are subjected to visual projection to reveal the underlying structure. The inventive schema has utility in all applications in which high-dimensionality multivariate data is to be reduced to a two- or three-dimensional projection space to allow visual exploration of the underlying structure of the data set.

Description

DATA DECOMPOSITION/REDUCTION METHOD FOR
VISUALIZING DATA CLUSTERS/SUB-CLUSTERS
Backcrround Art The present invention relates generically to the field of data analysis and data presentation -' and, more particularly, to the analysis of data sets having higher dimensionality data points in order to optimally present the data in a lower dimensional order context, i.e., in a hierarchy of two- or three-dimensional visual contexts to reveal data structures within the data set.
Background Art

The present invention relates generally to the field of data analysis and data presentation and, more particularly, to the analysis of data sets having higher dimensionality data points in order to optimally present the data in a lower dimensional context, i.e., in a hierarchy of two- or three-dimensional visual contexts that reveal data structures within the data set.

The visualization of data sets having a large number of data points with multiple variables or attributes associated with each data point represents a complex problem. In general, there is no way, a priori, to easily identify groups or sub-groups of data points that have relational attributes such that structures and sub-structures existing within the data set can be visualized.
Various techniques have been developed for processing the data sets to reveal internal structures as an aid to understanding the data. In general, a large data set will oftentimes have data points that are multivariate, that is, a single data point can have a multitude of attributes, including attributes that are completely independent from one another or that have some degree of inter-attribute relationship or dependency.
Any elementary visualization process involving the projection of the data set onto a two-dimensional visualization space using straightforward projection algorithms becomes progressively less adequate as the order of the data points increases. Thus, a single projection of a higher-order data set onto a visualization space may not be able to present all of the structures and sub-structures within the data set of interest in such a way that the structures or sub-structures can be visually distinguished or discriminated.
One form of presentation schema involves hierarchical visualization by which the data set is viewed at a highest-level, whole data set viewpoint.
Thereafter, features within the highest-level projection are identified in accordance with an algorithm(s) or other identification criteria, and those next-highest-level features are further processed to reveal their respective internal structure in another projection(s). This hierarchical process can be repeated for successive levels to present successively finer and more detailed views of the data set. Thus, in a hierarchical visualization scheme, an image tree is provided with the successively lower images of the tree revealing more detail.
One such hierarchical data visualization scheme is disclosed by C. M. Bishop and M. E. Tipping in an article entitled "A Hierarchical Latent Variable Model for Data Visualization," IEEE Trans. Pattern Anal. Machine Intell., Vol. 20, No. 3, pp. 282-293, March 1998. Bishop and Tipping present a hierarchical visualization algorithm based on a two-dimensional hierarchical mixture of latent variable models, the parameters of which are estimated using the expectation-maximization (EM) algorithm. The construction of the hierarchical tree proceeds top down so that structure decomposition is driven interactively by the user and the optimal projection is determined by the maximum likelihood principle. A hierarchy of multiple two-dimensional visualization spaces is provided, with the top-level projection displaying the entire data set and successive lower-level projections displaying clusters within the data set displayed at the top level. Further lower-level projections display sub-clusters and related internal structures within the data set.
Initially, the data set is subjected by Bishop and Tipping to a form of linear latent variable modeling to find a representation of the multi-dimensional data set in terms of two latent, or "hidden," variables that are determined indirectly from the data set. The modeling is similar to principal component analysis, but defines a probability density in the data space. In applying the Bishop and Tipping protocol, a single top-level latent variable model is generated with the posterior mean of each data point plotted in the latent space. Any cluster centers identified in this initial plot are used as the basis for initiating the next-lower-level analysis leading to a mixture of the latent variable models.
There are two potential limitations associated with the Bishop and Tipping approach. First, although a probability density is defined in the data space through a latent variable model, the prior and order of the mixture model are heuristically selected and the conditional distribution is undesirably restricted to an isotropic Gaussian, which may misrepresent the true data structures and put the optimality of the formulation in doubt.
Secondly, the parameters, including the optimal projections, are determined by maximum likelihood;
this criterion need not always lead to the most interesting or interpretable visualization plots.
Disclosure of Invention

The present invention provides a data decomposition/reduction method for visualizing large sets of multivariate data, including the processing of the multivariate data down to a two- or three-dimensional space in order to optimally reveal otherwise hidden structures within the data set, including the principal data cluster or clusters at a first or top level of processing and additional sub-clusters within the principal data clusters in successive lower-level visualizations. The identification of the morphology of clusters and subclusters and of inter-cluster separation and relative positioning within a large data set allows investigation of the underlying drive that created the data set morphology and the intra-data-set features.
The data set, constituted by a multitude of data points each having a plurality of attributes, is initially processed as a whole using multiple finite normal mixture models and hierarchical visualization spaces to develop the multi-level data visualization and interpretation. The top-level model and its projection explain the entire data set, revealing the presence of clusters and cluster relationships, while lower-level models and projections display internal structure within individual clusters, such as the presence of subclusters, which might not be apparent in the higher-level models and projections. With many complementary mixture models and visualization projections, each level is relatively simple while the complete hierarchy maintains overall flexibility and still conveys considerable structural information. The arrangement combines (a) minimax entropy modeling, by which the models are determined and the various parameters estimated, and (b) principal component analysis to optimize structure decomposition and dimensionality reduction.
The present invention advantageously performs a probabilistic principal component analysis to project the softly partitioned data space down to a desired two-dimensional visualization space, leading to an optimal dimensionality reduction that allows the best extraction and visualization of local clusters.
The minimax entropy principle is used to select the model structures and estimate their parameter values, where the soft partitioning of the data set results in a standard finite normal mixture model with minimum conditional bias and variance. By performing the principal component analysis and minimax entropy modeling alternately, a complete hierarchy of complementary projections and refined models can be generated automatically, corresponding to a statistical description best fitted to the data.
The present invention treats structure decomposition and dimensionality reduction as two separate but complementary operations, where the criterion used to optimize dimensionality reduction is the separation of clusters rather than the maximum likelihood approach of Bishop and Tipping.
The resulting projections, in turn, enhance the performance of structure decomposition at the next lower level.
Thereafter, a model selection procedure is applied to determine the number of subclusters inside each cluster at each level using an information theoretic criterion based upon the minimum of alternate calculations of the Akaike Information Criterion (AIC) and the minimum description length (MDL) criteria. This determination allows the process of the present invention to automatically determine whether a further split of a subspace should be implemented or whether to terminate further processing.
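As a concrete illustration of this split-or-terminate test, the sketch below (hypothetical Python, not the patent's code) scores candidate cluster counts with the AIC and a BIC-style MDL and keeps the count at the joint minimum; the name `select_order` and the use of scikit-learn's EM-fitted GaussianMixture as a stand-in for the SFNM are our assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_order(x, k_max=6):
    """Pick the cluster count K0 that minimizes the AIC/MDL criteria.

    Sketch: sklearn's GaussianMixture stands in for the EM-fitted SFNM,
    and BIC plays the role of MDL (the two coincide for this model family).
    """
    best_k, best_score = 1, np.inf
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, random_state=0).fit(x)
        score = min(gm.aic(x), gm.bic(x))  # minimum of the two criteria
        if score < best_score:
            best_k, best_score = k, score
    return best_k  # a further split is declined when best_k == 1
```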
A probabilistic adaptive principal component extraction (PAPEX) algorithm is also applied to estimate the desired number of principal axes. When the dimensionality of the raw data is high, this PAPEX approach is computationally very efficient.
Lastly, the present invention defines a probability distribution in data space which naturally induces a corresponding distribution in projection space through a Radon transform. This defined probability distribution permits an independent procedure for determining values of the intrinsic model parameters without concurrent estimation of the projection mapping matrix. In many data sets in which the data points are multivariate, the data points often form clusters because more than one variable may be a function of the same underlying "drive."
In accordance with the present invention, and as an initial step in processing the raw data set, the data set (designated herein as the t-space) is projected onto a single x-space (i.e., a two-dimensional space), in which a descriptor W is determined from the sample covariance matrix C_t by fitting a single Gaussian model to the data set over t-space.
Thereafter, a value f(x) is determined for K = 1, 2, ..., K_max clusters, in which the values of π_k and θ_xk are initialized by the user and estimated by maximizing the likelihood over x-space.
After f(x) is determined, the values of the Akaike Information Criterion (AIC) and the minimum description length (MDL) for the various cluster counts K = 1, 2, ..., K_max are calculated, and a model is selected with a K_0 that corresponds to the minimum of the calculated values of the AIC and the MDL criteria.

The value f(t) is then determined for K_0, in which the values of π_k, z_ik, μ_tk, and C_tk are further refined by maximizing the likelihood over t-space.
W_k is determined by directly evaluating the covariance matrix C_tk or by learning from t_ik for k = 1, 2, ..., K_0.
Thereafter, x_ik = z_ik W_k^T (t_i − μ_tk) for k = 1, 2, ..., K_0 is plotted by projecting the data set onto multiple x-subspaces at the second level for visual evaluation by the user.
Then g_k(t) is determined by repeating the above process steps to thus construct multiple x-subspaces at the third level; the hierarchy is completed under the information theoretic criteria using the AIC and the MDL, and all x-space subspaces are plotted for visual evaluation.
The present invention advantageously provides a data decomposition/reduction method for visualizing data clusters/sub-clusters within a large data space that is optimally effective and computationally efficient.
Other objects and further scope of applicability of the present invention will become apparent from the detailed description to follow, taken in conjunction with the accompanying drawings, in which like parts are designated by like reference characters.
Brief Description of the Drawings

The present invention is described below, by way of example, with reference to the accompanying drawings, wherein:

FIG. 1 is a schematic block diagram of a system for processing a raw multivariate data set in accordance with the present invention;
FIG. 2 is a flow diagram of the process flow of the present invention;
FIG. 2A is an alternative visualization of the process flow of the present invention;
FIG. 3 is an example of the projection of a data set onto a 2-dimensional visualization space after determination of the principal axis;
FIG. 4A is a 2-dimensional visualization space of one of the clusters of FIG. 3;
FIG. 4B is a 2-dimensional visualization space of another of the clusters of FIG. 3;
FIG. 5 is an example of the projection of a data set onto a 2-dimensional visualization space after determination of the principal axis;
FIG. 6A is a 2-dimensional visualization space of one of the clusters of FIG. 5;
FIG. 6B is a 2-dimensional visualization space of a second of the clusters of FIG. 5; and FIG. 6C is a 2-dimensional visualization space of a third of the clusters of FIG. 5.
Best Mode for Carrying Out the Invention

A processing system for implementing the dimensionality reduction using probabilistic principal component analysis and structure decomposition using adaptive expectation maximization methods for visualizing data in accordance with the present invention is shown in FIG. 1 and designated generally therein by the reference character 10. As shown, the system 10 includes a working memory 12 that accepts the raw multivariate data set, indicated at 14, and which bi-directionally interfaces with a processor 16.
The processor 16 processes the raw t-space data set 14 as explained in more detail below and presents that data to a graphical user interface (GUI) 18 which presents a two- or three- dimensional visual presentation to the user as also explained below.
If desired, a plotter or printer 20 (or other hard copy output device) can be provided to generate a printed record of the display output of the graphical user interface (GUI). The processor 16 may take the form of a software- or firmware-programmed CPU, ALU, ASIC, or microprocessor, or a combination thereof.
As the initial step in processing the raw data, and as presented in FIG. 2, the data set is subjected to a global principal component analysis to thereafter effect a top-most projection. This step is initiated by determining the value of a variable W for the top-most projection in the hierarchy of projections. For relatively low dimensional data sets, W is found by directly evaluating the covariance matrix C_t. For higher dimensional data sets, only the top two eigenvectors of the covariance matrix of the data points are of interest; depending upon the dimensionality of the raw data, it may be computationally more efficient to apply the adaptive principal components extraction (APEX) algorithm described in Y. Wang, S. H. Lin, H. Li, and S. Y. Kung, "Data mapping by probabilistic modular networks and information theoretic criteria," IEEE Trans. Signal Processing, Vol. 46, No. 12, pp. 3378-3397, December 1998, to find W directly from the raw data points t_i. After the data set is projected and displayed along its principal component axes, and on the basis of this single x-space and given a fixed K, the user then selects or identifies those points μ_xk on the plot corresponding to the centers of apparent clusters.
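For orientation, a minimal sketch of this top-level step in Python, our own illustration assuming the two leading eigenvectors of the sample covariance define W as described above (the PAPEX sketch further below covers the learning route for very high dimensions):

```python
import numpy as np

def top_level_projection(t):
    """Top-level x-space: project t-space data onto the two leading
    principal axes of the sample covariance C_t (the 'directly evaluate
    the covariance matrix' route)."""
    mu_t = t.mean(axis=0)
    c_t = np.cov(t, rowvar=False)            # sample covariance C_t
    eigvals, eigvecs = np.linalg.eigh(c_t)   # eigenvalues in ascending order
    w = eigvecs[:, -2:][:, ::-1]             # top two eigenvectors -> W
    x = (t - mu_t) @ w                       # x_i = W^T (t_i - mu_t)
    return x, w, mu_t
```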
The two-step expectation-maximization (EM) algorithm can be applied to fit a standard finite normal mixture model (SFNM), i.e.,

$$f(x) = \sum_{k=1}^{K_0} \pi_k \, g(x \mid \theta_{xk}) \qquad \text{(EQ. 1)}$$

$$g(x \mid \theta_{xk}) = \int g(t \mid \theta_{tk}) \, \delta\!\left(x - W^T t + W^T \mu_t\right) dt \qquad \text{(EQ. 2)}$$

and where the log-likelihood of projecting the data under the Radon transform is

$$\mathcal{L} = \sum_i \log f(x_i) \qquad \text{(EQ. 3)}$$

The standard finite normal mixture (SFNM) modeling solution addresses the estimation of the regional parameters (π_k, θ_tk) and the detection of the structural parameter K_0 in the relationship

$$f(t) = \sum_{k=1}^{K_0} \pi_k \, g(t \mid \theta_{tk}) \qquad \text{(EQ. 4)}$$

based on the observations t. It has been shown that when K_0 is given, the maximum likelihood (ML) estimate of the regional parameters can be obtained using the expectation-maximization (EM) algorithm.
There are two observations with the described approach: when the dimension of the data space is high, the implementation of the expectation-maximization (EM) algorithm is very complex and time consuming. Additionally, the initialization of the expectation-maximization (EM) algorithm is heuristically chosen, and this heuristic selection often leads to only a locally optimal solution. Therefore, it is reasonable to consider the model parameter values being estimated, first, in the projected x-space and then further adjusted or fine tuned in the data t-space. One natural criterion used for determining the optimal parameter values is to minimize the distance between the standard finite normal mixture (SFNM) distribution f(x) and the data histogram f_x. Relative entropy (Kullback-Leibler distance), as suggested by information theory, is a suitable measure of the reconstruction error, given by:
$$D(f_x \,\|\, f) = \sum_x f_x(x) \log \frac{f_x(x)}{f(x)} \qquad \text{(EQ. 5)}$$

When relative entropy is used as a distance measure, distance minimization is equivalent to maximum likelihood estimation, summarized by

$$\mathcal{L} = -N\left( H(f_x) + D(f_x \,\|\, f) \right) \qquad \text{(EQ. 6)}$$

where H is the entropy calculator described by Y. Wang, S. H. Lin, H. Li, and S. Y. Kung in "Data mapping by probabilistic modular networks and information theoretic criteria," IEEE Trans. Signal Processing, Vol. 46, No. 12, pp. 3378-3397, December 1998.
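A small sketch of this reconstruction-error measure, assuming both the histogram f_x and the model density f have been evaluated on a common set of bins (the binning choice is ours, not specified here):

```python
import numpy as np

def relative_entropy(f_hist, f_model, eps=1e-12):
    """Kullback-Leibler distance D(f_x || f) of EQ. 5 between the data
    histogram f_x and the SFNM density f, both on the same bins and
    normalized to sum to one."""
    p = f_hist / f_hist.sum()
    q = np.clip(f_model / f_model.sum(), eps, None)  # avoid log(0)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```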
The EM algorithm is implemented as a two-step process, i.e., the E-step and the M-step, as follows.

E-step:

$$z_{ik}^{(n)} = \frac{\pi_k^{(n)} \, g\!\left(x_i \mid \theta_{xk}^{(n)}\right)}{\sum_{k=1}^{K_0} \pi_k^{(n)} \, g\!\left(x_i \mid \theta_{xk}^{(n)}\right)} \qquad \text{(EQ. 7)}$$

M-step:

$$\pi_k^{(n+1)} = \frac{1}{N} \sum_{i=1}^{N} z_{ik}^{(n)} \qquad \text{(EQ. 8A)}$$

$$\mu_{xk}^{(n+1)} = \frac{\sum_{i=1}^{N} z_{ik}^{(n)} \, x_i}{\sum_{i=1}^{N} z_{ik}^{(n)}} \qquad \text{(EQ. 8B)}$$

$$C_{xk}^{(n+1)} = \frac{\sum_{i=1}^{N} z_{ik}^{(n)} \left(x_i - \mu_{xk}^{(n+1)}\right)\left(x_i - \mu_{xk}^{(n+1)}\right)^T}{\sum_{i=1}^{N} z_{ik}^{(n)}} \qquad \text{(EQ. 8C)}$$
In each complete processing cycle, the previous set of parameter values is used to determine the posterior probabilities z_ik^(n) using the E-step equation. These posterior probabilities are then used to obtain the new set of values π_k^(n+1), μ_xk^(n+1), and C_xk^(n+1) using the appropriate M-step equations. The processing is continued until a minimum in the value of the relative entropy D(f_x || f^(n)) is identified, unless it is already at a local minimum. The model selection procedure will then determine the optimal number K_0 of models to fit at the next level down in the hierarchy using the two information theoretic criteria (i.e., the values of Akaike's Information Criterion (AIC) and the Minimum Description Length (MDL) are calculated for each candidate K, with selection of a model in which K corresponds to the minimum of the AIC and the MDL). The resulting points μ_tk^(0) in data space, obtained by

$$\mu_{tk} = W \mu_{xk} + \mu_t \qquad \text{(EQ. 9)}$$

are then used as the initial means of the respective submodels. Since the mixing proportions π_k are projection-invariant, a 2 × 2 unit matrix is assigned to the remaining parameters of the covariance matrix C_tk. The expectation-maximization (EM) algorithm can again be applied to allow a standard finite normal mixture (SFNM) with K_0 submodels to be fitted to the data over t-space.
The corresponding EM algorithm can be derived by replacing all x in the E-step and the M-step equations, above, by t.
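The following sketch implements EQS. 7 through 8C for the x-space fit; the user-selected centers μ_xk seed the means, while the uniform mixing proportions, identity covariances, and log-likelihood stopping rule are our assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_sfnm(x, centers, n_iter=100, tol=1e-6):
    """EM for the SFNM over x-space (EQS. 7-8C). `centers` holds the
    user-selected cluster centers mu_xk. The t-space refinement
    replaces x by t throughout, as the text describes."""
    n, d = x.shape
    k0 = len(centers)
    pi = np.full(k0, 1.0 / k0)
    mu = np.asarray(centers, dtype=float)
    cov = np.stack([np.eye(d)] * k0)
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step (EQ. 7): posterior probabilities z_ik
        dens = np.stack([pi[k] * multivariate_normal.pdf(x, mu[k], cov[k])
                         for k in range(k0)], axis=1)
        z = dens / dens.sum(axis=1, keepdims=True)
        # M-step (EQS. 8A-8C): re-estimate pi_k, mu_xk, C_xk
        nk = z.sum(axis=0)
        pi = nk / n
        mu = (z.T @ x) / nk[:, None]
        for k in range(k0):
            r = x - mu[k]
            cov[k] = (z[:, k, None] * r).T @ r / nk[k]
        ll = float(np.log(dens.sum(axis=1)).sum())
        if ll - prev_ll < tol:  # converged: relative entropy at a minimum
            break
        prev_ll = ll
    return pi, mu, cov, z
```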
With a soft partitioning of the data set to generate possible models for the next level projection using the EM algorithm, data points will now effectively belong to more than one cluster at any given level. Thus, the effective input values are t_ik = z_ik (t_i − μ_tk) for an independent visualization subspace k in the hierarchy. C_tk can be directly evaluated to obtain W_k as described above. However, when the determination of W_k is based on a neural network learning scheme, an algorithm termed the probabilistic adaptive principal component extraction (PAPEX) is applied as follows.
The feedforward weight vectors w_mk and the lateral (feedback) weight a_k are initialized to small random values at time i = 1, and a small positive value is assigned to the learning rate parameter η.

For m = 1 and for i = 1, 2, ..., the value of

$$y_{1k}(i) = w_{1k}^T(i) \, t_{ik} \qquad \text{(EQ. 10)}$$

is computed, where

$$w_{1k}(i+1) = w_{1k}(i) + \eta \left[ y_{1k}(i) \, t_{ik} - y_{1k}^2(i) \, w_{1k}(i) \right] \qquad \text{(EQ. 11)}$$

For large values of i, w_1k(i) → w_1k, where w_1k is the eigenvector associated with the largest eigenvalue of the covariance matrix C_tk. Thereafter, m is set equal to 2 and, for i = 1, 2, ..., the following values are computed:

$$y_{2k}(i) = w_{2k}^T(i) \, t_{ik} + a_k(i) \, y_{1k}(i) \qquad \text{(EQ. 12)}$$

$$w_{2k}(i+1) = w_{2k}(i) + \eta \left[ y_{2k}(i) \, t_{ik} - y_{2k}^2(i) \, w_{2k}(i) \right] \qquad \text{(EQ. 13)}$$

and

$$a_k(i+1) = a_k(i) - \eta \left[ y_{1k}(i) \, y_{2k}(i) + y_{2k}^2(i) \, a_k(i) \right] \qquad \text{(EQ. 14)}$$

For large values of i, w_2k(i) → w_2k, where w_2k is the eigenvector associated with the second largest eigenvalue of the covariance matrix C_tk.
Having determined the principal axes W_k of the mixture model at the second level, the visualization subspaces are then constructed by plotting each data point t_i at the corresponding x_ik for k = 1, 2, ..., K_0.
Thus if one particular point takes most of the contribution for a particular component, then that point will effectively be visible only on the corresponding subspace.
The determination of the parameters of the models at the third level can again be viewed as a two-step estimation problem, in which a further split of the models at the second level is determined within each of the subspaces over x-space, and then the parameters of the selected models are fine tuned over t-space. Based on the plot of x_ik, the learning of g_k(x) can again be performed using the expectation-maximization (EM) algorithm and the model selection procedures described above. The third level EM algorithm has the same form as the EM algorithm at the second level, except that in the E-step, the posterior probability that a data point x_i belongs to submodel j is given by

$$z_{i(k,j)} = z_{ik} \, \frac{\pi_{(k,j)} \, g\!\left(x_i \mid \theta_{x(k,j)}\right)}{\sum_j \pi_{(k,j)} \, g\!\left(x_i \mid \theta_{x(k,j)}\right)} \qquad \text{(EQ. 15)}$$

where the z_ik are constants estimated from the second level of the hierarchy. The corresponding M-step in the expectation-maximization algorithm is then given by

$$\pi_{(k,j)} = \frac{\sum_{i=1}^{N} z_{i(k,j)}}{\sum_{i=1}^{N} z_{ik}} \qquad \text{(EQ. 16)}$$

$$\mu_{x(k,j)} = \frac{\sum_{i=1}^{N} z_{i(k,j)} \, x_i}{\sum_{i=1}^{N} z_{i(k,j)}} \qquad \text{(EQ. 17)}$$

$$C_{x(k,j)} = \frac{\sum_{i=1}^{N} z_{i(k,j)} \left(x_i - \mu_{x(k,j)}\right)\left(x_i - \mu_{x(k,j)}\right)^T}{\sum_{i=1}^{N} z_{i(k,j)}} \qquad \text{(EQ. 18)}$$

Similarly, the resulting points in data space,

$$\mu_{(k,j)} = W_k \, \mu_{x(k,j)} + \mu_{tk} \qquad \text{(EQ. 19)}$$

are then used to initialize the means of the respective submodels, and the expectation-maximization (EM) algorithm can be applied to allow a standard finite normal mixture (SFNM) distribution with the selected number of submodels to be fitted to the data over t-space. The formulation can be derived by simply replacing all x in the second level M-step by t.
With the resulting z_i(k,j) in t-space, the PAPEX algorithm can be applied to estimate W_(k,j), in which the effective input values are expressed by

$$t_{i(k,j)} = z_{i(k,j)} \left( t_i - \mu_{(k,j)} \right) \qquad \text{(EQ. 20)}$$

The next level visualization subspace is generated by plotting each data point t_i at the corresponding

$$x_{i(k,j)} = z_{i(k,j)} \, W_{(k,j)}^T \, t_i \qquad \text{(EQ. 21)}$$

values in the (k, j) subspace.
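A sketch of how one (k, j) subspace's plotting coordinates could be formed from EQS. 20 and 21, with the posterior z_i(k,j) softly weighting each point (our illustration, not the patent's exact code):

```python
import numpy as np

def subspace_plot_coords(t, z, w, mu):
    """Coordinates for one (k, j) visualization subspace: effective
    inputs per EQ. 20 projected onto the subspace axes W_(k,j).
    Points with small posterior z land near the origin, so each point
    is effectively visible only in its dominant subspace."""
    t_eff = z[:, None] * (t - mu)   # EQ. 20: t_i(k,j)
    x = t_eff @ w                   # EQ. 21-style projection onto W_(k,j)
    return x
```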
The construction of the entire tree structure hierarchy is automatically completed, with the flow diagram of FIG. 2 ending when no further data split is recommended by the information theoretic criteria (AIC and MDL) in all of the parent subspaces.
A first exemplary two-level implementation of the present invention is shown in FIGS. 3, 4A, and 4B, in which the entire data set is present in the top level projection and two local clusters within that top level projection are each individually presented in FIGS. 4A and 4B. As shown in FIG. 3, the entire data set is subjected to principal component analysis as described above to obtain the principal axis or axes (axis AX being representative) for the top level display.
Additionally, the axis (unnumbered) for each of the apparent clusters is displayed. Thereafter, the apparent centers of the two clusters are identified and the data subjected to the aforementioned processing to further reveal the local cluster of FIG. 4A and the local cluster of FIG. 4B.
A second exemplary two-level implementation of the present invention is shown in FIGS. 5, 6A, 6B, and 6C, in which the entire data set is present in the top level projection and three local clusters within that top level projection are each individually presented in FIGS. 6A, 6B, and 6C. As shown in FIG. 5, the entire data set is subjected to principal component analysis as described above to obtain the principal axis (AX) and the axis (unnumbered) for each of the apparent clusters as displayed. The t-space raw data set arises from a mixture of three Gaussians consisting of 300 data points as presented in FIG. 5. As shown, two cloud-like clusters are well separated while a third cluster appears spaced in between the two well-separated cloud-like clusters. By performing the same operations as described above, the second level visual space is generated with a mixture of two local principal component axis subspaces, where the line AX indicates the global principal axis. When the two information theoretic criteria (AIC and MDL) are applied to examine these two cluster plots, the plot on the "right" of FIG. 5 shows evidence of a further split. At the third level of data modeling, a hierarchical model is adopted, which illustrates that there are indeed three clusters in total within the data set, as shown in FIGS. 6A, 6B, and 6C.
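The following hypothetical snippet mimics this second example end to end, reusing the `top_level_projection` and `select_order` sketches above; the three component means, unit spreads, and ambient dimension are invented stand-ins, not the patent's actual data:

```python
import numpy as np

# Synthetic stand-in for the FIG. 5 experiment: 300 points drawn from
# a mixture of three Gaussians, two well separated and one in between.
rng = np.random.default_rng(1)
means = np.array([[-4.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 1.5, 0.0]])
t = np.vstack([rng.normal(m, 1.0, size=(100, 3)) for m in means])

x_top, w, mu_t = top_level_projection(t)  # top-level x-space (sketch above)
k0 = select_order(x_top)                  # AIC/MDL pick; a count of two here
print(f"criteria suggest {k0} clusters")  # would prompt the further split
```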
An alternate visualization of the process flow of the present invention is shown in FIG. 2A, in which the data is input and structured and the high-dimensional data set is then subjected to algorithmic processing to iteratively effect the data structure decomposition, dimensionality reduction, and multiple model selection using the AIC/MDL criteria, and to effect a best fit for the next subsequent projection. Thereafter, extraction by the above-described probabilistic adaptive principal component processing and the Radon transform is effected to thereafter generate the data cluster visualizations.

Industrial Applicability

The present invention has use in all applications requiring the analysis of data, particularly multi-dimensional data, for the purpose of optimally visualizing various underlying structures and distributions present within the universe of data. Applications include the detection of data clusters and sub-clusters and their relative relationships in areas of medical, industrial, and geophysical imaging, and digital library processing, for example.
As will be apparent to those skilled in the art, various changes and modifications may be made to the illustrated data decomposition/reduction method for visualizing data clusters/sub-clusters of the present invention without departing from the spirit and scope of the invention as determined in the appended claims and their legal equivalents.
CROSS REFERENCE TO UNITED STATES PROVISIONAL PATENT APPLICATION
This application claims the benefit of the filing date of co-pending U.S. Provisional Patent Application No. 60/100,622, filed on September 17, 1998 by the same inventor herein and entitled "Hierarchical Minimax Entropy Modeling and Visualization for Data Representation and Analysis/Interpretation," the disclosure of which is incorporated herein by reference.

Claims (13)

Claims:
1. A method of processing a data set of a multitude of data points each having a dimensionality greater than at least three to provide a hierarchy of visualizations of the data set into an at least two-dimensional space including a top-level visualization and at least one second level visualization presenting at least one cluster K of the top-level visualized therein, comprising the steps of:
providing, as the top-level visualization, a reduced dimension projection of the entire data set along at least a principal axis into an at least two-dimensional visualization space in which the dimensionality of the projected data set is reduced by principal component analysis of the data set to obtain a principal component projection axis;
selecting at least one point on said first-mentioned visualization space corresponding to centers of apparent clusters;
developing an optimum number of possible models for a second level projection;
determining the optimum number of local clusters K for the second level projection by alternately calculating the Akaike information criteria and the minimum description length and using the minimum of the Akaike information criterion and minimum description length to determine the optimum number of local clusters K;

determining the principal axes for visualization subspaces for the so-determined local clusters and projecting the data for at least one of the so-determined local clusters in a visualization space different from the first-mentioned visualization space.
2. The method of claim 1, wherein the principal component for the top-level projection is determined by directly evaluating the covariance matrix.
3. The method of claim 1, wherein the principal component for the top-level projection is determined by adaptive principal components extraction.
4. The method of claim 1, wherein the plurality of possible models for the next-to-top level projection are developed by successive cycles of the E-step and the M-step of expectation-maximization algorithm until a minimum relative entropy is attained.
5. A method of optimally processing a large data set of high dimensional (>3) data points to provide, by dimensional reduction, cluster analysis, and two-dimensional surface projection of a hierarchy of visual displays for the purpose of discerning data information relationships therein, comprising the steps of:

a. providing, through principal component analysis, a top level visualization as a projection of the entire data set onto a two-dimensional visualization space defined by its principal component projection axis;
b. selection by algorithm of an initial best estimate of data points on said first-mentioned visualization space corresponding to centers of apparent clusters;
c. developing an optimal number of possible models for a second level projection;
d. determining the optimal number of local clusters for the second level projections by calculating the minimum of the AIC or the MDL to determine the optimum number of second level clusters;
e. determining the principal component axis of each second level cluster for projection onto respective two-dimensional subspaces for display visualization; and f. repeating steps c, d, and e until no further data point clusters are algorithmically detectable.
6. A method of optimally processing a large data set of high dimensional (>3) data points to provide, by dimensional reduction, cluster analysis and two-dimensional surface projection of a hierarchy of visual displays for the purpose of discerning data information relationships therein, comprising the steps of:
a. providing, through principal component analysis, a top level visualization as a projection of the entire data set onto a two-dimensional visualization space defined by its principal component projection axis;

b. heuristically selecting, from multiple competing choices, the initial best estimates of data points on said first-mentioned visualization space corresponding to centers of apparent clusters;
c. developing an optimal number of possible models for a second level projection;
d. determining the optimal number of local clusters for the second level projections by calculating the minimum of the AIC or the MDL to determine the optimum number of second level clusters;
e. determining the principal component axis of each second level cluster for projection onto respective two-dimensional subspaces for display visualization; and f. repeating steps c, d, and e until no further data point clusters are heuristically detectable.
7. A system for processing a data set of a multitude of data points each having a dimensionality greater than at least three to provide a hierarchy of visualizations of the data set into an at least two-dimensional space including a top-level visualization and at least one second level visualization presenting at least one cluster K of the top-level visualized therein, characterized by:
a processor having a cooperating memory containing a data set of a multitude of data points each having a dimensionality greater than at least three;

a display for presenting one or more visualizations of the data set as processed by the processor;
the processor providing, as the top-level visualization on the display, a reduced dimension projection of the entire data set along at least a principal axis into an at least two-dimensional visualization space in which the dimensionality of the projected data set is reduced by principal component analysis of the data set to obtain a principal component projection axis;
the processor selecting at least one point on said first-mentioned visualization space corresponding to centers of apparent clusters;
the processor thereafter developing an optimum number of possible models for a second level projection;
the processor determining the optimum number of local clusters K for the second level projection by alternately calculating the Akaike information criteria and the minimum description length and using the minimum of the Akaike information criterion and minimum description length to determine the optimum number of local clusters K;
the processor determining the principal axes for visualization subspaces for the so-determined local clusters and projecting the data for at least one of the so-determined local clusters in a visualization space on the display different from the first-mentioned visualization space.
8. The system of claim 7, wherein the principal component for the top-level projection is determined by directly evaluating the covariance matrix.
9. The system of claim 7, wherein the principal component for the top-level projection is determined by adaptive principal components extraction.
10. The system of claim 7, wherein the plurality of possible models for the next-to-top level projection are developed by successive cycles of the E-step and the M-step of the expectation-maximization algorithm until a minimum relative entropy is attained.
11. A system for optimally processing a large data set of high dimensional (>3) data points to provide, by dimensional reduction, cluster analysis, and two-dimensional surface projection of a hierarchy of visual displays for the purpose of discerning data information relationships therein, characterized by:
a processor having a cooperating memory containing a data set of a multitude of data points each having a dimensionality greater than at least three and a display for presenting one or more visualizations of the data set as processed by the processor;

the processor, through principal component analysis, providing a top level visualization as a projection of the entire data set onto a two-dimensional visualization space in the display and defined by its principal component projection axis;
the processor selecting by algorithm an initial best estimate of data points on said first-mentioned visualization space corresponding to centers of apparent clusters;
the processor developing an optimal number of possible models for a second level projection;
the processor determining the optimal number of local clusters for the second level projections by calculating the minimum of the AIC or the MDL to determine the optimum number of second level clusters;
the processor determining the principal component axis of each second level cluster for projection onto respective two-dimensional subspaces for visualization on the display.
12. A system for optimally processing a large data set of high dimensional (>3) data points to provide, by dimensional reduction, cluster analysis and two-dimensional surface projection of a hierarchy of visual displays for the purpose of discerning data information relationships therein, characterized by:
a processor having a cooperating memory containing a data set of a multitude of data points each having a dimensionality greater than at least three and a display for presenting one or more visualizations of the data set as processed by the processor;

the processor effecting a principal component analysis of the data to provide a top level visualization as a projection of the entire data set onto a two-dimensional visualization space of the display and defined by its principal component projection axis;
the processor, in response to a heuristic selection entered by a user, providing an initial best estimate of data points on said first-mentioned visualization space corresponding to centers of apparent clusters;
the processor thereafter developing an optimal number of possible models for a second level projection;
the processor determining the optimal number of local clusters for the second level projections by calculating the minimum of the AIC or the MDL to determine the optimum number of second level clusters; and the processor determining the principal component axis of each second level cluster for projection onto respective two-dimensional subspaces for display visualization by the display.
13. A computer automated process for generating a hierarchy of minimax entropy models and optimum visualization projections for high dimensional space data to improve data representation and interpretation, characterized by:
structurally decomposing a high dimensional space data utilizing minimax entropy principles to develop a statistical framework for model identification to an optimal number and kernel shape of local clusters from said data;
dimensionally reducing said high dimensional space data by combining minimax entropy principles and principal component analysis to optimize data structure decomposition;

iteratively and separately performing principal component analysis and minimax entropy model identification to generate a hierarchy of complementary projections and models to develop an intrinsic model to best-fit the high dimensional space data; and creating a substantially reduced dimensional visualization space to facilitate better data representation and interpretation of said data.
CA002310333A 1998-09-17 1999-09-17 Data decomposition/reduction method for visualizing data clusters/sub-clusters Abandoned CA2310333A1 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US10062298P 1998-09-17 1998-09-17
US60/100,622 1998-09-17
US39842199A 1999-09-17 1999-09-17
PCT/US1999/021363 WO2000016250A1 (en) 1998-09-17 1999-09-17 Data decomposition/reduction method for visualizing data clusters/sub-clusters
US09/398,421 1999-09-17

Publications (1)

Publication Number Publication Date
CA2310333A1 true CA2310333A1 (en) 2000-03-23

Family

ID=26797375

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002310333A Abandoned CA2310333A1 (en) 1998-09-17 1999-09-17 Data decomposition/reduction method for visualizing data clusters/sub-clusters

Country Status (5)

Country Link
EP (1) EP1032918A1 (en)
JP (1) JP2002525719A (en)
AU (1) AU5926299A (en)
CA (1) CA2310333A1 (en)
WO (1) WO2000016250A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11847132B2 (en) 2019-09-03 2023-12-19 International Business Machines Corporation Visualization and exploration of probabilistic models

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2453608C (en) 2003-12-17 2007-11-06 Ibm Canada Limited - Ibm Canada Limitee Estimating storage requirements for a multi-dimensional clustering data configuration
JP4670010B2 (en) * 2005-10-17 2011-04-13 株式会社国際電気通信基礎技術研究所 Mobile object distribution estimation device, mobile object distribution estimation method, and mobile object distribution estimation program
US8239379B2 (en) * 2007-07-13 2012-08-07 Xerox Corporation Semi-supervised visual clustering
US20090232388A1 (en) * 2008-03-12 2009-09-17 Harris Corporation Registration of 3d point cloud data by creation of filtered density images
JP5332647B2 (en) * 2009-01-23 2013-11-06 日本電気株式会社 Model selection apparatus, model selection apparatus selection method, and program
US9424337B2 (en) 2013-07-09 2016-08-23 Sas Institute Inc. Number of clusters estimation
US9202178B2 (en) 2014-03-11 2015-12-01 Sas Institute Inc. Computerized cluster analysis framework for decorrelated cluster identification in datasets
CN105447001B (en) * 2014-08-04 2018-12-14 华为技术有限公司 Methods of Dimensionality Reduction in High-dimensional Data and device
JP6586764B2 (en) * 2015-04-17 2019-10-09 株式会社Ihi Data analysis apparatus and data analysis method
US9996543B2 (en) 2016-01-06 2018-06-12 International Business Machines Corporation Compression and optimization of a specified schema that performs analytics on data within data systems
US11164106B2 (en) * 2018-03-19 2021-11-02 International Business Machines Corporation Computer-implemented method and computer system for supervised machine learning

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11847132B2 (en) 2019-09-03 2023-12-19 International Business Machines Corporation Visualization and exploration of probabilistic models

Also Published As

Publication number Publication date
EP1032918A1 (en) 2000-09-06
WO2000016250A1 (en) 2000-03-23
JP2002525719A (en) 2002-08-13
AU5926299A (en) 2000-04-03

Similar Documents

Publication Publication Date Title
CN109964222B (en) System and method for processing an input point cloud having a plurality of points
Stanford et al. Finding curvilinear features in spatial point patterns: principal curve clustering with noise
Tirandaz et al. A two-phase algorithm based on kurtosis curvelet energy and unsupervised spectral regression for segmentation of SAR images
Han et al. Rotation-invariant and scale-invariant Gabor features for texture image retrieval
Singh et al. Topological methods for the analysis of high dimensional data sets and 3d object recognition.
Bertozzi et al. Diffuse interface models on graphs for classification of high dimensional data
Froyen et al. Bayesian hierarchical grouping: Perceptual grouping as mixture estimation.
Li et al. Exploring compositional high order pattern potentials for structured output learning
CA2310333A1 (en) Data decomposition/reduction method for visualizing data clusters/sub-clusters
Krasnoshchekov et al. Order-k α-hulls and α-shapes
Vikjord et al. Information theoretic clustering using a k-nearest neighbors approach
CN111062428A (en) Hyperspectral image clustering method, system and equipment
Allassonniere et al. A stochastic algorithm for probabilistic independent component analysis
EP3345128B1 (en) Clustering images based on camera fingerprints
Vyas et al. Automated texture analysis with gabor filter
EP1206752A1 (en) Visualization method and visualization system
CN111753921A (en) Hyperspectral image clustering method, device, equipment and storage medium
Vilalta et al. An efficient approach to external cluster assessment with an application to martian topography
Hennig et al. Validating visual clusters in large datasets: fixed point clusters of spectral features
Olsson et al. Improved spectral relaxation methods for binary quadratic optimization problems
CN109978066B (en) Rapid spectral clustering method based on multi-scale data structure
Sikka et al. Comparison of algorithms for ultrasound image segmentation without ground truth
Guizilini et al. Iterative continuous convolution for 3d template matching and global localization
Li Unsupervised texture segmentation using multiresolution Markov random fields
Lee et al. Deterministic Hypothesis Generation for Robust Fitting of Multiple Structures

Legal Events

Date Code Title Description
FZDE Discontinued