U.S. patent application number 10/478191 was published by the patent office on 2005-03-31 under publication number 20050071140 for model selection for cluster data analysis.
The invention is credited to Asa Ben-Hur, Andre Elisseeff, and Isabelle Guyon.
United States Patent Application 20050071140
Kind Code: A1
Ben-Hur, Asa; et al.
March 31, 2005
Model selection for cluster data analysis
Abstract
A model selection method is provided for choosing the number of
clusters, or more generally the parameters of a clustering
algorithm. The algorithm is based on comparing the similarity
between pairs of clustering runs on sub-samples or other
perturbations of the data. High pairwise similarities show that the
clustering represents a stable pattern in the data. The method is
applicable to any clustering algorithm, and can also detect lack of
structure. We show results on artificial and real data using a
hierarchical clustering algorithm.
Inventors: Ben-Hur, Asa (Seattle, WA); Elisseeff, Andre (Thalwil, CH); Guyon, Isabelle (Berkeley, CA)
Correspondence Address: PROCOPIO, CORY, HARGREAVES & SAVITCH LLP, 530 B STREET, SUITE 2100, SAN DIEGO, CA 92101, US
Family ID: 26967221
Appl. No.: 10/478191
Filed: November 1, 2004
PCT Filed: May 17, 2002
PCT No.: PCT/US02/15666
Related U.S. Patent Documents

Application Number   Filing Date    Patent Number
60292221             May 18, 2001
60335990             Nov 30, 2001
Current U.S. Class: 703/11
Current CPC Class: G06F 16/35 20190101; G06K 9/6218 20130101; G16B 25/10 20190201; G16B 25/00 20190201; G16B 40/00 20190201; G16B 40/20 20190201; G16B 40/30 20190201
Class at Publication: 703/011
International Class: G06G 007/48; G06G 007/58
Claims
1. A method for clustering data in a dataset comprising: selecting
a plurality of granularity levels k; applying a clustering
algorithm to a sub-sample of the dataset so that k clusters are
produced; for each k, selecting a plurality of sub-samples of the
dataset; selecting a plurality of pairs of sub-samples; calculating
a similarity between the plurality of pairs; determining a
distribution of the similarity between the plurality of pairs;
comparing the distributions for all k; and selecting as the optimum
granularity level, the k corresponding to the tightest
distribution.
2. The method of claim 1, wherein the clustering algorithm is a
k-means algorithm.
3. The method of claim 1, wherein the clustering algorithm is
hierarchical clustering.
4. The method of claim 1, wherein the step of selecting the highest
granularity level comprises selecting a number of clusters for
which there is a transition from a distribution that is peaked near
1 to a wide distribution.
5. The method of claim 1, further comprising, after determining the
distribution, calculating a cumulative distribution function for
each granularity level, wherein the step of selecting the highest
granularity level comprises identifying an increase in the area
under the cumulative distribution function.
6. The method of claim 1, wherein the data comprises gene
expression coefficients.
7. A method for analysis of data in a dataset comprising: selecting
a plurality of granularity levels in a clustering algorithm; for
each granularity level: inducing a perturbation in the data;
selecting a plurality of pairs of perturbations; computing a
pairwise similarity for each of the plurality of pairs; determining a
distribution of the pairwise similarity; and selecting the highest
granularity level having the tightest distribution.
8. The method of claim 7, wherein the step of inducing comprises
grouping the data into a plurality of sub-samples.
9. The method of claim 8 wherein the step of grouping comprises
applying a k-means clustering algorithm to the data.
10. The method of claim 8 wherein the step of grouping comprises
applying a hierarchical clustering algorithm to the data.
11. The method of claim 7, wherein the step of inducing a
perturbation comprises initialization of the k-means algorithm.
12. The method of claim 7, wherein the step of inducing a
perturbation comprises adding random noise to the data.
13. The method of claim 7, wherein the step of selecting the
highest granularity level comprises selecting a number of clusters
for which there is a transition from a distribution that is peaked
near 1 to a wide distribution.
14. The method of claim 7, further comprising, after determining
the distribution, calculating a cumulative distribution function
for each granularity level, wherein the step of selecting the
highest granularity level comprises identifying an increase in the
area under the cumulative distribution function.
15. The method of claim 7, wherein the data comprises gene
expression coefficients.
16. A method for clustering data comprising a plurality of
structured patterns, the method comprising: (a) selecting a
clustering algorithm based on a dissimilarity measure between pairs
of structured patterns; (b) randomly assigning class labels to the
structured patterns; (c) defining a plurality of cluster centers by
averaging structured patterns within each labeled class; (d)
measuring dissimilarity between each structured pattern and its
corresponding cluster center by measuring a residual of a fit of
one structured pattern onto another structured pattern; (e)
reassigning structured patterns to the labeled class with the most
similar cluster center; and (f) repeating steps (c) through (e)
until assignment of structured patterns to the labeled classes
remains constant.
17. The method of claim 16, wherein step (d) comprises using a fit
that is invariant with respect to affine transformations.
18. The method of claim 17, wherein the affine transformations
comprise a combination of translation, scaling and rotation.
19. The method of claim 16, further comprising calculating an
average structured pattern, wherein the residual is an average
residual of the fit of each structured pattern to the average
structured pattern.
20. The method of claim 16, wherein step (d) comprises measuring a
Euclidean distance and a residual of a fit of one structured
pattern onto another structured pattern.
21. The method of claim 16, further comprising, after step (c),
clustering the cluster centers.
22. The method of claim 16, wherein the clustering algorithm is a
k-means algorithm.
23. The method of claim 16, wherein the structured patterns are
temporal profiles.
24. The method of claim 23, wherein the temporal profiles are gene
expression profiles.
25. A method for clustering patterns in a dataset comprising:
selecting a plurality of granularity levels k; selecting a
clustering algorithm adapted for producing a number of cluster
centers equal to k and for assigning patterns to clusters
corresponding to the cluster centers; for each granularity level k:
(a) inducing perturbations in the dataset to generate a modified
dataset; (b) applying the clustering algorithm to the at least one
modified dataset to produce k clusters under each of the
perturbations; (c) creating a new dataset comprising the cluster
centers identified in step (b); (d) applying the clustering
algorithm to the new dataset using the same value of k clusters;
(e) determining the stability of the clusterings at each
granularity level k by measuring dissimilarity between data in the
new dataset and the cluster center for the cluster into which the
data was assigned; measuring fit of the data to the cluster centers
for all k; and selecting as the optimum granularity level the k
corresponding to the best fit.
26. The method of claim 25, wherein the perturbations comprise a
combination of one or more of sub-sampling the dataset, changing
initialization of the clustering algorithm, and adding noise to the
dataset.
27. The method of claim 25, wherein the dataset comprises gene
expression profiles.
Description
FIELD OF THE INVENTION
[0001] The present invention relates to the use of learning
machines to identify relevant patterns in datasets containing large
quantities of diverse data, and more particularly to a method and
system for unsupervised learning for determining an optimal number
of data clusters into which data can be divided to best enable
identification of relevant patterns.
BACKGROUND OF THE INVENTION
[0002] Knowledge discovery is the most desirable end product of
data collection. Recent advancements in database technology have
led to an explosive growth in systems and methods for generating,
collecting and storing vast amounts of data. While database
technology enables efficient collection and storage of large data
sets, the challenge of facilitating human comprehension of the
information in this data is growing ever more difficult. With many
existing techniques, the problem has become unapproachable. Thus,
there remains a need for a new generation of automated knowledge
discovery tools.
[0003] As a specific example, the Human Genome Project has
completed sequencing of the human genome. The complete sequence
contains a staggering amount of data, with approximately 31,500
genes in the whole genome. The amount of data relevant to the
genome must then be multiplied when considering comparative and
other analyses that are needed in order to make use of the sequence
data. As an illustration, human chromosome 20 alone comprises
nearly 60 million base pairs. Several disease-causing genes have
been mapped to chromosome 20, including genes implicated in various
autoimmune diseases, certain neurological diseases, type 2 diabetes,
and several forms of cancer, such that considerable information can be
associated with this sequence alone.
[0004] One of the more recent advances in determining the
functioning parameters of biological systems is the analysis of
correlation of genomic information with protein functioning to
elucidate the relationship between gene expression, protein
function and interaction, and disease states or progression.
Proteomics is the study of the group of proteins encoded and
regulated by a genome. Genomic activation or expression does not
always mean direct changes in protein production levels or
activity. Alternative processing of mRNA or post transcriptional or
post-translational regulatory mechanisms may cause the activity of
one gene to result in multiple proteins, all of which are slightly
different with different migration patterns and biological
activities. The human proteome is believed to be 50 to 100 times
larger than the human genome. Currently, there are no methods,
systems or devices for adequately analyzing the data generated by
such biological investigations into the genome and proteome.
[0005] Clustering is a widely used approach for exploratory data
analysis. Clustering analysis is unsupervised learning--it is done
without suggestion from an external supervisor; classes and
training examples are not given a priori. The objective of
clustering is to group data points into "meaningful" subsets. There
is no agreed upon definition of the clustering problem, and various
definitions appear in the literature. For example, clustering has
been defined as a search for some "natural" or "inherent" grouping
of the data. However, most clustering algorithms do not address
this problem. The vast majority of clustering algorithms produce as
their output either a dendrogram or a partition into a number of
clusters, where the number of clusters is either the input, or
there is some other parameter(s) that controls the number of
clusters. In either case, a model selection technique is required
in order to choose the model parameter, or in the case of
hierarchical algorithms, to determine which level of the dendrogram
represents the "inherent" structure of the data.
[0006] A few examples of applications of clustering include (1)
analysis of microarray data, where co-expressed genes are found,
and the assumption is that co-expression might be a sign of
co-regulation; (2) in medical datasets (gene expression data,
clinical data etc.), where patients are divided into categories;
(3) in any set of measurements to detect trends or artifacts
in the measurement protocol; and (4) in information retrieval to
partition text according to categories.
[0007] Most clustering algorithms either produce a hierarchical
partitioning of the data into smaller and smaller clusters, or
produce a partition of a dataset into a number of clusters that
depend on some input parameter (the number of clusters or some
other parameter(s)). The question remains, however, of how to set
the input parameter, or how to determine which level of the tree
representation of the data to look at. Clustering algorithms are
unsophisticated in that they provide no insight into the level of
granularity at which the "meaningful" clusters might be found.
Occasionally, there may be prior knowledge about the domain that
facilitates making such a choice. However, even in such cases, a
method for determining the granularity at which to look at the data
is required. This is seen as the problem of finding the optimal
number of clusters in the data, relative to some clustering
algorithm.
[0008] E. Levine and E. Domany in "Resampling Method for
Unsupervised Estimation of Cluster Validity", Neural Comp. 13,
2573-2593 (2001), assign a figure of merit to a clustering solution
according to its similarity to clusterings of subsamples of the
data. The "temperature" parameter of their clustering algorithm is
selected according to a maximum of the similarity measure. However,
in real data, such a maximum does not often occur. Other model
selection techniques have difficulty detecting the absence of
structure in the data, i.e., that there is a single cluster.
Further, many of the algorithms make assumptions as to cluster shape,
and do not perform well on real data, where the cluster shape is
generally not known. Accordingly, other methods for clustering are
needed. The present invention is directed to such a method.
SUMMARY OF THE INVENTION
[0009] In an exemplary embodiment, a system and method are provided
for analysis of data by grouping the input data points into
meaningful subsets of data. The exemplary system comprises a
storage device for storing at least one data set, and a processor
for executing the clustering algorithm. In the preferred
embodiments, the clustering algorithm is a k-means or hierarchical
clustering algorithm.
[0010] In the method of the present invention, a "good granularity
level", i.e., an optimal number of clusters, is defined as one at
which a clustering solution is stable with respect to some
perturbation of the data, such as noise or sub-sampling. For each
level of granularity, the algorithm chooses a number of pairs of
sub-samples or other perturbations, clusters each subsample with
the chosen level of granularity, then computes a similarity between
pairs of clustering solutions. For each level of granularity, the
distribution of the similarity between pairs of clustering
solutions is computed, then the highest level of granularity for
which highly similar solutions are obtained under perturbation is
chosen.
[0011] According to the present invention, the probability
distribution of the similarity measure is evaluated in order to
find the "true" number of clusters based on a similarity measure
between clustering solutions. A dot product between pairs of
clustering solutions is normalized to provide a similarity measure.
Other similarity measures proposed in the literature can also be
expressed in terms of this dot product.
[0012] In one embodiment, the inventive method provides means for
extraction of information from gene expression profiles, in
particular, extraction of the most relevant clusters of temporal
expression profiles from tens of thousands of measured profiles. In
one variation of the present embodiment, the clustering algorithm
is the k-means algorithm. Other embodiments include all pairwise
clustering methods (e.g., hierarchical clustering) and support
vector clustering (i.e., a support vector machine (SVM)). In one
variation, the fit involves affine transformations such as
translation, rotation, and scale. Other transformations could be
included, such as elastic transformations. In the case of k-means,
the algorithm is applied in a hierarchical manner by sub-sampling
the data and clustering the sub-samples. The resulting cluster
centers for all the runs are then clustered again. The resulting
cluster centers are considered to be the most significant profiles.
To facilitate the clustering task, a ranking of the genes is first
performed according to a given quality criterion combining saliency
(significant difference in expression in the profile), smoothness,
and reliability (low noise level). Other criteria for ranking
include the local density of examples.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] FIGS. 1a-1c are a plot of data which is a mixture of four
Gaussians, a histogram of the correlation score for the Gaussian
data mixture, and a plot showing the overlay of cumulative
distributions of the correlation score, respectively.
[0014] FIGS. 2a and 2b are a histogram of the correlation score for
the yeast gene expression data and an overlay of the cumulative
distributions of the correlation score, respectively.
[0015] FIGS. 3a and 3b are a histogram of the correlation score for
208 points uniformly distributed on the unit cube and an overlay of
the cumulative distributions of the correlation score,
respectively.
[0016] FIGS. 4a-4c are a plot of the first two principal
components, the histogram of the correlation score, and an overlay
of the cumulative distributions of the correlation score from k=2
(rightmost) to k=10 (leftmost), respectively.
[0017] FIG. 5 is a graphic representation of gene
pre-selection.
[0018] FIG. 6 is a graph of the average profiles of the clusters of
the 8 clustering runs and their grouping into meta-clusters.
[0019] FIG. 7 is a graph of curve fitting with affine
transformations.
[0020] FIG. 8 is a graph of curve fitting with affine
transformations, clustering obtained from the top 800 best-quality
genes.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0021] A typical computer system for running the inventive
clustering algorithm is a Pentium®-class processor with an
interface for inputting the dataset(s) of interest and a memory for
storing the dataset. Alternative processors include configurable
logic processors made up of SRAM-based field programmable gate
arrays (FPGAs) or configurable system on chip (CSOC) architectures,
which include a processor and an array of configurable logic cells
on a single chip, either of which can be used to accelerate
computationally intensive operations such as clustering algorithms on
large datasets. In addition, parallel implementations in
multi-computers have been used for executing clustering algorithms
for extracting knowledge from large-scale data repositories.
Selection of an appropriate computer system for executing the
inventive clustering method is within the level of skill in the
art.
[0022] The clustering model selection algorithm works with the help
of a scoring function that provides a similarity measure between
two labelings.
[0023] Let $X = \{x_1, \ldots, x_n\}$, with $x_i \in \mathbb{R}^d$, be the dataset to be clustered. A labeling $L$ is a partition of $X$ into $k$ subsets $S_1, \ldots, S_k$. It can be represented by a function $c: X \rightarrow \{1, \ldots, k\}$, where $c(x_i)$ is the cluster to which $x_i$ belongs.
[0024] A less compact representation of a labeling, which may be useful, is the representation by a matrix $C$ with components:
$$C_{ij} = \begin{cases} 1 & \text{if } x_i \text{ and } x_j \text{ belong to the same cluster,} \\ 0 & \text{otherwise.} \end{cases} \qquad (1)$$
[0025] Note that this representation is label independent, i.e.
there is no need to assign each point a label in {1, . . . , k}.
This may be viewed as an adjacency matrix, where each cluster is a
connected component of the graph. This representation can also be
converted into a representation of soft cluster labeling.
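For illustration only (this code is not part of the patent), the matrix representation of Equation (1) can be built directly from a vector of cluster labels; a minimal numpy sketch, assuming integer labels:

import numpy as np

def coincidence_matrix(labels):
    # C[i, j] = 1 if points i and j belong to the same cluster, 0 otherwise (Equation (1))
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(int)

# Example: points 0 and 1 share a cluster, point 2 is alone.
print(coincidence_matrix([1, 1, 2]))
# [[1 1 0]
#  [1 1 0]
#  [0 0 1]]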
[0026] Let labelings $L_1$ and $L_2$ have matrix representations $C^{(1)}$ and $C^{(2)}$, respectively, and define the dot product
$$\langle L_1, L_2 \rangle = \sum_{i,j} C^{(1)}_{ij} C^{(2)}_{ij}. \qquad (2)$$
[0027] This dot product computes the number of pairs of vectors clustered together, and can also be interpreted as the number of common edges in the graphs represented by $C^{(1)}$ and $C^{(2)}$. A naïve method for computing the dot product by going over all pairs of points has complexity $O(n^2)$. However, it can be computed in linear time.
[0028] As a dot product, $\langle L_1, L_2 \rangle$ satisfies the Cauchy-Schwarz inequality: $\langle L_1, L_2 \rangle \leq \sqrt{\langle L_1, L_1 \rangle \, \langle L_2, L_2 \rangle}$. The correlation score, which is a normalized version of the dot product, is:
$$\mathrm{cor}(L_1, L_2) = \frac{\langle L_1, L_2 \rangle}{\sqrt{\langle L_1, L_1 \rangle \, \langle L_2, L_2 \rangle}}. \qquad (3)$$
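As a hedged illustration (not the patent's reference implementation), the dot product of Equation (2) and the correlation score of Equation (3) can be computed from the contingency table of the two labelings rather than from the full n x n matrices, which keeps the cost linear in the number of points; the helper names below are of the editor's choosing:

import numpy as np
from collections import Counter

def labeling_dot(labels1, labels2):
    # <L1, L2> of Equation (2): sum over contingency-table cells of (cell count)^2
    counts = Counter(zip(labels1, labels2))
    return sum(c * c for c in counts.values())

def correlation_score(labels1, labels2):
    # Normalized dot product of Equation (3)
    return labeling_dot(labels1, labels2) / np.sqrt(
        labeling_dot(labels1, labels1) * labeling_dot(labels2, labels2))

# Two labelings of five points that agree up to renaming of the clusters give a score of 1.0
print(correlation_score([0, 0, 1, 1, 2], [5, 5, 3, 3, 7]))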
[0029] A. K. Jain and R. C. Dubes, Algorithms for clustering data
(Prentice Hall, Englewood Cliffs, N.J., 1988) provide a number of
scores for comparing a labeling produced by a clustering algorithm
with a "gold standard" labeling. This technique is known as
"external" validation. In contrast, the present invention proposes
the use of scoring functions for "internal" validation that do not
require the "gold standard". In the following, it is shown that two
commonly used scoring functions can be expressed in terms of the
dot product defined above in Equation 2. Given two matrices
$C^{(1)}, C^{(2)}$ with 0-1 entries, let $N_{ij}$, $i, j \in \{0, 1\}$, be the number of entries on which $C^{(1)}$ and $C^{(2)}$ have values $i$ and $j$, respectively. Define the matching coefficient as the fraction of entries on which the two matrices agree:
$$M(L_1, L_2) = \frac{N_{00} + N_{11}}{N_{00} + N_{01} + N_{10} + N_{11}}. \qquad (4)$$
[0030] The Jaccard coefficient is the corresponding ratio when "negative" matches are ignored:
$$J(L_1, L_2) = \frac{N_{11}}{N_{01} + N_{10} + N_{11}}. \qquad (5)$$
[0031] The Jaccard coefficient is more appropriate when the
clusters are relatively small, since in that case the $N_{00}$
term dominates the matching coefficient, keeping it high even when
the solution is far from the true one. These scores can be expressed in terms of the
labeling dot product and the associated norm according to the
following proposition:
[0032] Let $C^{(1)}$ and $C^{(2)}$ be the matrix representations of labelings $L_1$ and $L_2$, respectively. Then:
$$J(L_1, L_2) = \frac{\langle C^{(1)}, C^{(2)} \rangle}{\langle C^{(1)}, C^{(1)} \rangle + \langle C^{(2)}, C^{(2)} \rangle - \langle C^{(1)}, C^{(2)} \rangle}, \qquad M(L_1, L_2) = 1 - \frac{1}{n^2} \left\| C^{(1)} - C^{(2)} \right\|^2.$$
[0033] This is a result of the observation that $N_{11} = \langle C^{(1)}, C^{(2)} \rangle$, $N_{01} = \langle 1_n - C^{(1)}, C^{(2)} \rangle$, $N_{10} = \langle C^{(1)}, 1_n - C^{(2)} \rangle$, and $N_{00} = \langle 1_n - C^{(1)}, 1_n - C^{(2)} \rangle$, where $1_n$ is an $n \times n$ matrix with all entries equal to 1. The above expression for the Jaccard coefficient shows that it is close to the correlation score.
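The proposition can be checked numerically with the same quantities; a small illustrative sketch, reusing the labeling_dot helper from the earlier sketch (again not part of the patent):

def jaccard_and_matching(labels1, labels2):
    # J and M of Equations (4) and (5), via the dot-product identities of paragraph [0033]
    n = len(labels1)
    d12 = labeling_dot(labels1, labels2)              # N11
    d11 = labeling_dot(labels1, labels1)
    d22 = labeling_dot(labels2, labels2)
    jaccard = d12 / (d11 + d22 - d12)
    matching = 1.0 - (d11 + d22 - 2.0 * d12) / float(n * n)   # 1 - ||C1 - C2||^2 / n^2
    return jaccard, matching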
[0034] The following list provides the routine for the model
explorer algorithm according to the present invention:
[0035] Input: A dataset D, k.sub.min, k.sub.max
[0036] Require: A clustering algorithm, cluster (D, k)
[0037] A scoring function for label comparison, s(L.sub.1,
L.sub.2)
[0038] f=0.8
[0039] for k=k.sub.min to k.sub.max do
[0040] for i=1 to maximum iterations do
[0041] sub.sub.1=subsamp(D,f) {a sub-sample with fraction f of the
data}
[0042] sub.sub.2=subsamp(D,f) {a second sub-sample with fraction f of the data}
[0043] L.sub.1=cluster (sub.sub.1, k)
[0044] L.sub.2=cluster (sub.sub.2, k)
[0045] Intersect=sub.sub.1 ∩ sub.sub.2
[0046] Score (i, k)=s(L.sub.1(Intersect), L.sub.2(Intersect))
{Compute the score on the intersection of the two labels}
[0047] end for
[0048] end for
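The routine above can be rendered in a few lines of Python. The following is an illustrative sketch under assumptions of the editor's choosing (the dataset D is a numpy array, cluster_fn plays the role of cluster(D, k), and score_fn plays the role of s, e.g. the correlation_score sketched earlier); it is not the patent's own implementation:

import numpy as np

def model_explorer(D, k_min, k_max, cluster_fn, score_fn, f=0.8, n_pairs=100, seed=None):
    # For each k, cluster pairs of random sub-samples and score their agreement
    # on the intersection of the two sub-samples (cf. paragraphs [0035]-[0048]).
    rng = np.random.default_rng(seed)
    n = len(D)
    scores = {}
    for k in range(k_min, k_max + 1):
        scores[k] = []
        for _ in range(n_pairs):
            idx1 = rng.choice(n, size=int(f * n), replace=False)
            idx2 = rng.choice(n, size=int(f * n), replace=False)
            labels1 = dict(zip(idx1, cluster_fn(D[idx1], k)))
            labels2 = dict(zip(idx2, cluster_fn(D[idx2], k)))
            common = np.intersect1d(idx1, idx2)        # points present in both sub-samples
            scores[k].append(score_fn([labels1[i] for i in common],
                                      [labels2[i] for i in common]))
    return scores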
[0049] When one looks at a cloud of data points and at a sub-sample
of it, for a sampling ratio f (the fraction of points sampled) that is not
much smaller than 1 (say f>0.5), one usually observes the same
general structure. Thus, it is also reasonable to postulate that a
clustering algorithm has captured the inherent structure in a
dataset if clustering solutions over different subsamples are
similar, e.g., according to one of the similarity measures
introduced in the previous section. Thus, "inherent structure" is
defined as structure that is stable under sub-sampling (or
alternatively, perturbing the data). Given a clustering algorithm
and a dataset, this translates into a search for the best number of
clusters to use in the particular instance. Note that one can also
extend the search to look for a set of features where structure is
apparent; however, in this case, the same set of features is
kept.
[0050] The inventive clustering algorithm receives as input a
dataset (or similarity/dissimilarity matrix) and a parameter k that
controls either directly or indirectly the number of clusters that
the algorithm produces. This convention is applicable to
hierarchical clustering algorithms as well: given k, the tree is
cut so that k clusters are produced. Next, characterize the
stability of the clustering for each k. This is accomplished by
producing a set of clusterings of sub-samples of the data, and
comparing the labels of the intersection of pairs of clusterings
using, for example, the correlation similarity measure. This is
performed for increasing values of k (see above for details). The
distribution of the scores for the different values of k is then
compared (see, e.g., FIGS. 1a-1c). The idea is that when there is
structure in the data that is well described by the clustering
algorithm with that particular value of k, many sub-samples will
produce similar clusterings, and their pairwise similarity score
will be concentrated close to 1.
[0051] Each sub-sample contains a fixed fraction f of the data. The
actual subsampling can be implemented in various ways:
[0052] 1. Select each sample independently so that the size of the
intersection between two samples is random.
[0053] 2. Select together pairs of samples by first selecting their
intersection, then selecting the rest of the data to complete the
fraction f.
[0054] 3. Fix one clustering solution to be one produced on the
whole dataset. (This third option was used by Levine and Domany
(supra) to give a figure of merit to a particular clustering
solution.)
[0055] For k=1, all clusterings are the same. This also holds
for k=n, where n is the number of data points; in this case every
point is in a different cluster. When the number of clusters
becomes large, so that there is only a small number of points in each
cluster, the solution becomes stable. The value of f should not be
too low, so that all clusters are represented in each sub-sample.
In the Examples provided below, the shape of the distribution did
not depend very much on the specific value of f; any value between
0.6 and 0.9 worked well.
EXAMPLES
[0056] In this section experiments on artificial and real data are
described. In all the experiments the distribution of the
correlation score is shown. Equivalent results were obtained using
other scores as well. The parameter value f=0.8 was used, and 200 pairs
of solutions were compared for each k. A hierarchical clustering
algorithm was used, with the Ward criterion for merging clusters
(see, e.g., Jain and Dubes, supra). Similar results were obtained
using other hierarchical clustering methods (complete and average
linkage). The advantage of using hierarchical clustering methods is
that the same set of clusterings can be used for all values of
k.
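As a concrete, hedged illustration of this setup (the patent does not give an implementation), Ward-criterion hierarchical clustering from SciPy can stand in for cluster(D, k), reusing the model_explorer and correlation_score sketches above:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def ward_cluster(X, k):
    # Cut a Ward-linkage dendrogram so that k clusters are produced
    return fcluster(linkage(X, method='ward'), t=k, criterion='maxclust')

# Toy data in the spirit of Example 1: a mixture of four well-separated Gaussians
rng = np.random.default_rng(0)
centers = np.array([[0, 0], [0, 5], [5, 0], [5, 5]])
X = np.vstack([c + rng.normal(scale=0.5, size=(50, 2)) for c in centers])

scores = model_explorer(X, 2, 7, ward_cluster, correlation_score, f=0.8, n_pairs=200)
for k, s in sorted(scores.items()):
    print(k, np.median(s))   # per Example 1, scores are expected to stay near 1 up to k=4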
Example 1
Gaussian Data
[0057] Referring first to FIGS. 1a-1c, FIG. 1a shows a mixture of
four Gaussians. The histograms of the score for varying values of k
for this data are plotted in FIG. 1b. Histograms are shown for each
value of k in the range of 2 to 7. Observations regarding the
histograms are that at k=2, there is a peak at 1, since almost all
the runs discriminated between the two upper and two lower
clusters. At k=3, most runs separated the two lower clusters, and
at k=4 most runs found the "correct" clustering as is reflected in
the distribution of scores that is still close to 1.0. At k>4
there is no longer essentially one preferred solution. There is, in
fact, a wide variety of solutions, evidenced by the widening
spectrum of the similarities. FIG. 1c plots the cumulative
distributions of the correlation score for each k, with the k=2 curve
at the rightmost side of the plot (peaked near 1) and k=7 as the
leftmost curve.
Example 2
DNA Microarray Data
[0058] The next dataset considered was the yeast DNA microarray
data of M. Eisen et al. ("Cluster analysis and display of
genome-wide expression patterns", Proc. Natl. Acad. Sci. USA, 95:
14863-14868, December 1998). The data is a matrix which represents
the mRNA expression levels of n genes across a number of
experiments. Some of the genes in the data have known labels
according to a functional class. Five functional classes were
selected along with genes that belong uniquely to these five
functional classes. This yielded a dataset with 208 genes, with 79
features (experiments). Data was normalized by subtracting the mean
and dividing by the standard deviation for each column. This was
also performed for the rows, and repeated for the columns. At this
stage the first three principal components were extracted. The
distribution and histogram of scores is given in FIG. 2a for k over
the range of 2 to 7. The same behavior is observed as seen in the
mixture of four Gaussians data of FIG. 1a. Between k=5 and k=6,
there is a transition from a distribution that has a large
component near 1, to a wide distribution that is very similar to
the distribution on the random data. The clustering solution that
was obtained for k=5 agreed well with the given labels, with a
correlation score of 0.95.
Example 3
Uniformly Distributed Data
[0059] The results of a test run on data uniformly distributed on
the unit cube are shown in FIGS. 3a and 3b. The distributions are
quite similar to each other, with no change that can be interpreted
as a transition from a stable set of solutions to unstable
solutions.
[0060] The preceding examples indicate a simple way for choosing k
as the value where there is a transition from a score distribution
that is concentrated near 1 to a wider distribution. This can be
quantified, e.g., by an increase in the area under the cumulative
distribution function, or by a decrease in S(k)=P(s>0.90).
[0061] The value of 0.9 is arbitrary, but any value close to 1
would work on the set of examples considered here.
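One way to automate this choice, sketched here as an illustration (the 0.5 cut-off below is an arbitrary assumption, not taken from the patent): compute S(k)=P(s>0.90) from the scores returned by the model_explorer sketch and keep the largest k for which the distribution is still concentrated near 1:

import numpy as np

def choose_k(scores, threshold=0.9, concentration=0.5):
    # scores: dict mapping k -> list of pairwise similarity scores
    s_k = {k: float(np.mean(np.asarray(v) > threshold)) for k, v in scores.items()}
    best = min(scores)
    for k in sorted(scores):
        if s_k[k] >= concentration:   # distribution still peaked near 1
            best = k
        else:
            break                     # transition to a wide distribution detected
    return best, s_k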
Example 4
Isolet Letter Pronunciation
[0062] The next test was run on a portion of the ISOLET (Isolated
Letter Speech Recognition) database created by Ron Cole and Mark
Fanty of the Department of Computer Science and Engineering, Oregon
Graduate Institute, Beaverton, Oreg. and available from the UCI
(University of California at Irvine) Repository of Machine Learning
Databases. (This data set was generated by having 150 subjects
speak the name of each letter of the alphabet twice, generating 52
training examples from each speaker.) This test provides an example
of what occurs when there is cluster overlap.
[0063] One thousand points of the original dataset, representing
the letters "a", "c", "d", "e", "f" were used. The columns of the
data were "whitened", then the first three principal components
were extracted. FIG. 4a is a plot of the distribution of the first
two principal components for each of the selected letters. FIG. 4b
provides histograms of the correlation score for each k in the
range of 2 to 7, and FIG. 4c is an overlay of the cumulative
distribution of the correlation score with k=2 towards the far
right side of the plot (near 1) and k=10 towards the left hand side
of the plot.
[0064] Table 1 provides a comparison of the number of clusters
identified using the inventive method against the number of
clusters obtained using other methods for selecting k. These other
methods are described by R. Tibshirani, G. Walther, and T. Hastie
in "Estimating the number of clusters in a dataset via the Gap
statistic", Tech. Report, Department of Statistics, Stanford
University, 2000; also published in JRSSB 2000, where the Gap
Statistic methods is shown to be superior to a number of other
methods.
TABLE 1
Dataset              Jain   Silhouette   KL   Gap   Subsamp
4 Gauss (FIG. 1)       4        4         9    4       4
Microarray (FIG. 2)    3        2         4    6       5
Isolet (FIG. 4)        2        5         5    --      3
[0065] In the case of clustering algorithms whose output depends on
the initial condition, e.g. k-means, a distribution of scores
exists even when considering a fixed sub-sample. In such cases the
method produces an indication of how varied the solutions are for
various values of k. We have observed that for a "good" value of k
a similar transition occurs, but is generally "smeared", since
k-means produces widely varying solutions. To address this, we
implemented a version of k-means similar to that presented by P.
Bradley and U. Fayyad in "Refining initial points for k-means
clustering" (in J. Shavlik, editor, Proc. of the 15th Inter. Conf
of Machine Learning (ICML '98), pages 91-99, San Francisco, Calif.,
1998. Morgan Kaufmann.). That method produces initial conditions
that converge to near-optimal solutions, and a fixed initial
condition is used for each value of k. In contrast, k-means clustering
according to the present invention produces solutions that are
highly stable with respect to sub-sampling. This good result may be
due to the global optimization criterion, which differs from the
local bottom-up approach of hierarchical clustering that appears to
be less stable to sub-sampling.
Example 5
Gene Expression Data
[0066] A data set of 1600 genes with 12 time steps was utilized to
illustrate the process undergone by gene expression profiles.
[0067] First, the genes were ranked in order of "quality" to
pre-select a subset for further analysis. All the genes were ranked
according to three criteria: (1) saliency (the absolute difference
between their min and max value; the larger the better); (2)
smoothness (a coefficient assessing the smoothness of the profile
was computed; the smoother the better); and (3) reliability (the
average standard deviation of the experiment replicated for the
whole profile).
[0068] The ranks of the genes according to these three criteria
were then added to form a combined criterion according to which the
genes were ranked again. The result can be seen in FIGS. 5a and 5b,
which show gene expression temporal profiles. The ten best genes
are depicted in FIG. 5a, while the ten worst genes are shown in
FIG. 5b according to a combined criterion of saliency, smoothness,
and reliability.
[0069] The 5% top quality genes according to the above defined
combined criterion (800 genes) were selected. A kernel clustering
algorithm based on k-means was then run on random subsets of 100
genes among these 800 genes. A maximum number of clusters of ten
was used, but only five did not degenerate. The stability of the
solutions was verified by running kernel k-means again on the
resulting cluster centers. The solution was robust with respect to
increasing the number of genes (doubling, to 1600 genes), changing
the subset size (to 200 genes), and increasing the maximum number of
clusters (to 20).
[0070] FIGS. 6a and 6b illustrate the average profiles of the
clusters of the eight clustering runs and their groupings into
meta-clusters 1-5. The cluster centers in FIG. 6a were obtained
with multiple runs of k-means using random subsets of 100 genes in
the top 800 best quality genes. FIG. 6b shows the average cluster
centers for the five clusters. Only a subset of the nine possible
profiles that could occur are represented.
[0071] According to the present invention, the clustering algorithm
is based on, but is a variation of, the classical k-means
algorithm. The algorithm operates by the following routine:
[0072] Initialize: Start with a random assignment of class labels
to the patterns.
[0073] Step 1: Compute cluster centers by averaging class members
in each class.
[0074] Step 2: Re-assign patterns to the cluster with nearest
cluster center.
[0075] Iterate steps 1 and 2 until the assignment of patterns to
classes remains constant.
[0076] This algorithm differs from the classical k-means in that it
uses a special metric to measure proximity in step 2. It is based
on the residuals of a fit of one profile onto another, using an
affine transformation that is a combination of translation,
scaling, and rotation (to first order). Such a fit is remarkably
simple to implement with a couple of lines of Matlab® (The
MathWorks, Inc., Natick, Mass.) and is quite fast. For example, to
fit a profile (a 12-dimensional vector x2) whose time axis goes
through zero between time steps 5 and 6 onto another vector x1, one
can write:
[0077] xx2=[x2,ones(12,1),[-5;-4;-3;-2;-1;1;2;3;4;5;6;7]]; % design matrix: the profile, a constant offset, and the time axis
[0078] w=xx2\x1; % least-squares fit of x2 onto x1
[0079] x2fit=xx2*w; % fitted profile
[0080] residual=mean((x1-x2fit).^2); % mean squared residual of the fit
[0081] FIGS. 7a-d depict two profiles and variations on how they
can be fit one onto the other or both on their average. This
provides an illustration of curve fitting with affine
transformations. FIG. 7a shows the two original profiles P1 and P2.
FIG. 7b shows the profile P1 fitted on P2. FIG. 7c shows profile P2
fitted on P1. FIG. 7d shows both profiles fitted to their
average.
[0082] After some experimentation, the following measure of
dissimilarity was adopted:
[0083] residual0+max(residual1,residual2),
[0084] where residual0 is the squared Euclidean distance between x1
and x2, residual1 is the residual of the fit of x1 onto x2, and
residual2 is the residual of the fit of x2 onto x1. The rationale
behind this choice is that a dissimilarity simply based on the
symmetric fit to the average is, in some cases, too optimistic--it
becomes very easy with the type of affine transformations that are
being allowed to fit any curve to a straight line (but not vice
versa). Residual0 is added to avoid allowing overly large
transformations. The same desirable properties could be achieved by
solving an optimization problem under constraints to limit the
range of transformations, but this would be computationally more
expensive.
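A compact Python rendering of this dissimilarity, given for illustration (it mirrors the Matlab fragment above, with the same 12-point time axis assumed; it is not the patent's code):

import numpy as np

def fit_residual(target, source, t):
    # Mean squared residual after fitting `source` onto `target` with scaling,
    # a constant offset, and a first-order time term (cf. paragraph [0076]).
    target = np.asarray(target, dtype=float)
    source = np.asarray(source, dtype=float)
    design = np.column_stack([source, np.ones_like(source), t])
    w, *_ = np.linalg.lstsq(design, target, rcond=None)
    return float(np.mean((target - design @ w) ** 2))

def profile_dissimilarity(x1, x2,
                          t=np.array([-5, -4, -3, -2, -1, 1, 2, 3, 4, 5, 6, 7], dtype=float)):
    # residual0 + max(residual1, residual2), as in paragraphs [0083]-[0084]
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    residual0 = float(np.sum((x1 - x2) ** 2))   # squared Euclidean distance
    residual1 = fit_residual(x2, x1, t)         # residual of the fit of x1 onto x2
    residual2 = fit_residual(x1, x2, t)         # residual of the fit of x2 onto x1
    return residual0 + max(residual1, residual2)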
[0085] Any positive monotonic transformation of the dissimilarity
does not affect the algorithm. It should also be noted that in Step
1 of the algorithm, a simple average of the patterns is used, as
opposed to an average of the curves fitted to the cluster centers;
with the latter, after iterating, there is a significant distortion
of the cluster center profiles, some of which simply become straight lines.
[0086] FIGS. 8a-h illustrate the results of one run of the
algorithm on a subset of 100 gene expression profiles with a
maximum cluster number of 10, where expression intensity is plotted
with time. The clustering is obtained from the top 800 best quality
genes. FIG. 8a shows the original profiles with random class label
assignments. FIGS. 8c, 8e and 8g illustrate fitted profiles of
cluster members at successive iterations. FIGS. 8b, 8d, 8f, and 8h
show the cluster centers at successive iterations.
[0087] The present invention provides a method for model selection
in clustering. Most other approaches in the prior art are based on
either the sum of squared distances within clusters, or some
combination of within-cluster distances and between-cluster
distances. Not only do these prior art methods impose the notion of
how a cluster should look (e.g., "compact"), but most of the methods
also perform poorly in practice. The inventive method, on the other
hand, provides a description of the stability of a clustering
method with respect to re-sampling or noise. It is believed that
this stability captures the notion of "inherent" structure that is
the goal in validating clustering solutions.
[0088] Another advantage of the present invention is that it is useful
in identifying the absence of structure in the data. This differs
from most prior art clustering validation algorithms, which, with
the exception of the gap statistic, can only determine the
number of clusters if that number is larger than 1.
* * * * *