Spectral clustering is a pairwise clustering technique that uses the eigenvectors and eigenvalues of a normalized similarity matrix to cluster the data. While it is a popular clustering method, a limiting factor in spectral clustering is that the similarity matrix is not usually known a priori. In this talk we will review spectral clustering and present our method for learning the similarity matrix. We introduce the idea of optimizing a cost function composed of a clustering quality term, the gap, regularized by a clustering stability term, the eigengap. We will present our supervised learning method in detail, which assumes that a training set with known clustering labels is available for learning the similarity matrix. We will also discuss how our methodology extends to the unsupervised and semi-supervised settings.
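As background for the talk, the standard spectral clustering pipeline mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the speakers' method: it assumes a fixed Gaussian similarity with bandwidth `sigma` (precisely the kind of a-priori choice the talk proposes to learn), symmetrically normalizes the similarity matrix, and runs a small k-means on the rows of the top-k eigenvectors. The helper name `spectral_clustering` and all parameters are illustrative.

```python
import numpy as np

def spectral_clustering(X, k, sigma=1.0, iters=50):
    """Cluster rows of X into k groups via eigenvectors of a
    normalized similarity matrix (illustrative sketch)."""
    # Pairwise Gaussian similarity matrix W; sigma is a hand-picked
    # bandwidth, standing in for the similarity the talk learns.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    W = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)

    # Symmetric normalization: M = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    M = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

    # Top-k eigenvectors (eigh returns eigenvalues in ascending order),
    # with rows normalized to unit length before clustering.
    _, vecs = np.linalg.eigh(M)
    U = vecs[:, -k:]
    U = U / np.linalg.norm(U, axis=1, keepdims=True)

    # Tiny k-means on the embedded rows, with a deterministic
    # farthest-point initialization.
    centers = [U[0]]
    for _ in range(k - 1):
        dist = np.min([((U - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(U[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([U[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels
```

On two well-separated point clouds this recovers the groups, but the result depends heavily on `sigma`; learning the similarity matrix from labeled examples, as in the talk, removes that hand-tuning.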
Group for Research in Decision Analysis