
Robust continuous clustering

Sohil Atul Shah (a) and Vladlen Koltun (b)

(a) Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20740
(b) Intel Labs, Santa Clara, CA 95054

Edited by David L. Donoho, Stanford University, Stanford, CA, and approved August 7, 2017 (received for review January 13, 2017)

Significance

Clustering is a fundamental experimental procedure in data analysis. It is used in virtually all natural and social sciences and has played a central role in biology, astronomy, psychology, medicine, and chemistry. Despite the importance and ubiquity of clustering, existing algorithms suffer from a variety of drawbacks and no universal solution has emerged. We present a clustering algorithm that reliably achieves high accuracy across domains, handles high data dimensionality, and scales to large datasets. The algorithm optimizes a smooth global objective, using efficient numerical methods. Experiments demonstrate that our method outperforms state-of-the-art clustering algorithms by significant factors in multiple domains.

Abstract

Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.

Clustering is one of the fundamental experimental procedures in data analysis. It is used in virtually all natural and social sciences and has played a central role in biology, astronomy, psychology, medicine, and chemistry. Data-clustering algorithms have been developed for more than half a century (1). Significant advances in the last two decades include spectral clustering (2–4), generalizations of classic center-based methods (5, 6), mixture models (7, 8), mean shift (9), affinity propagation (10), subspace clustering (11–13), nonparametric methods (14, 15), and feature selection (16–20).

Despite these developments, no single algorithm has emerged to displace the $k$-means scheme and its variants (21). This is despite the known drawbacks of such center-based methods, including sensitivity to initialization, limited effectiveness in high-dimensional spaces, and the requirement that the number of clusters be set in advance. The endurance of these methods is in part due to their simplicity and in part due to difficulties associated with some of the new techniques, such as additional hyperparameters that need to be tuned, high computational cost, and varying effectiveness across domains. Consequently, scientists who analyze large high-dimensional datasets with unknown distribution must maintain and apply multiple different clustering algorithms in the hope that one will succeed. Books have been written to guide practitioners through the landscape of data-clustering techniques (22).

We present a clustering algorithm that is fast, easy to use, and effective in high dimensions. The algorithm optimizes a clear continuous objective, using standard numerical methods that scale to massive datasets. The number of clusters need not be known in advance.

The operation of the algorithm can be understood by contrasting it with other popular clustering techniques. In center-based algorithms such as $k$-means (1, 24), a small set of putative cluster centers is initialized from the data and then iteratively refined. In affinity propagation (10), data points communicate over a graph structure to elect a subset of the points as representatives. In the presented algorithm, each data point has a dedicated representative, initially located at the data point. Over the course of the algorithm, the representatives move and coalesce into easily separable clusters. The progress of the algorithm is visualized in Fig. 1.

Our formulation is based on recent convex relaxations for clustering (25, 26). However, our objective is deliberately not convex. We use redescending robust estimators that allow even heavily mixed clusters to be untangled by optimizing a single continuous objective. Despite the nonconvexity of the objective, the optimization can still be performed using standard linear least-squares solvers, which are highly efficient and scalable. Since the algorithm expresses clustering as optimization of a continuous objective based on robust estimation, we call it robust continuous clustering (RCC).

One of the characteristics of the presented formulation is that clustering is reduced to optimization of a continuous objective. This enables the integration of clustering in end-to-end feature learning pipelines. We demonstrate this by extending RCC to perform joint clustering and dimensionality reduction. The extended algorithm, called RCC-DR, learns an embedding of the data into a low-dimensional space in which it is clustered. Embedding and clustering are performed jointly, by an algorithm that optimizes a clear global objective.

We evaluate RCC and RCC-DR on a large number of datasets from a variety of domains. These include image datasets, document datasets, a dataset of sensor readings from the Space Shuttle, and a dataset of protein expression levels in mice. Experiments demonstrate that our method significantly outperforms prior state-of-the-art techniques. RCC-DR is particularly robust across datasets from different domains, outperforming the best prior algorithm by a factor of 3 in average rank.

Formulation

We consider the problem of clustering a set of $n$ data points. The input is denoted by $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n]$, where $\mathbf{x}_i \in \mathbb{R}^D$. Our approach operates on a set of representatives $\mathbf{U} = [\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n]$, where $\mathbf{u}_i \in \mathbb{R}^D$. The representatives $\mathbf{U}$ are initialized at the corresponding data points $\mathbf{X}$. The optimization operates on the representation $\mathbf{U}$, which coalesces to reveal the cluster structure latent in the data. Thus, the number of clusters need not be known in advance. The optimization of $\mathbf{U}$ is illustrated in Fig. 1.

The RCC objective has the following form:

\[ \mathcal{C}(\mathbf{U}) = \frac{1}{2}\sum_{i=1}^{n} \|\mathbf{x}_i - \mathbf{u}_i\|_2^2 + \frac{\lambda}{2}\sum_{(p,q)\in\mathcal{E}} w_{p,q}\,\rho\!\left(\|\mathbf{u}_p - \mathbf{u}_q\|_2\right). \qquad [1] \]

Here $\mathcal{E}$ is the set of edges in a graph connecting the data points. The graph is constructed automatically from the data. We use mutual $k$-nearest neighbors (m-kNN) connectivity (27), which is more robust than commonly used kNN graphs. The weights $w_{p,q}$ balance the contribution of each data point to the pairwise terms, and $\lambda$ balances the strength of the different objective terms.

The function $\rho(\cdot)$ is a penalty on the regularization terms. The use of an appropriate robust penalty function $\rho$ is central to our method. Since we want representatives $\mathbf{u}_i$ of observations from the same latent cluster to collapse into a single point, a natural penalty would be the $\ell_0$ norm ($\rho(y) = [y \neq 0]$, where $[\cdot]$ is the Iverson bracket). However, this transforms the objective into an intractable combinatorial optimization problem. At another extreme, recent work has explored the use of convex penalties, such as the $\ell_1$ and $\ell_2$ norms (25, 26). This has the advantage of turning objective 1 into a convex optimization problem. However, convex functions—even the $\ell_1$ norm—have limited robustness to spurious edges in the connectivity structure $\mathcal{E}$, because the influence of a spurious pairwise term does not diminish as representatives move apart during the optimization. Given noisy real-world data, heavy contamination of the connectivity structure by connections across different underlying clusters is inevitable. Our method uses robust estimators to automatically prune spurious intercluster connections while maintaining veridical intracluster correspondences, all within a single continuous objective.

The second term in objective 1 is related to the mean shift objective (9). The RCC objective differs in that it includes an additional data term, uses a sparse (as opposed to a fully connected) connectivity structure, and is based on robust estimation.

Our approach is based on the duality between robust estimation and line processes (28). We introduce an auxiliary variable $l_{p,q}$ for each connection $(p,q)\in\mathcal{E}$ and optimize a joint objective over the representatives $\mathbf{U}$ and the line process $\mathbb{L} = \{l_{p,q}\}$:

\[ \mathcal{C}(\mathbf{U},\mathbb{L}) = \frac{1}{2}\sum_{i=1}^{n}\|\mathbf{x}_i - \mathbf{u}_i\|_2^2 + \frac{\lambda}{2}\sum_{(p,q)\in\mathcal{E}} w_{p,q}\left(l_{p,q}\|\mathbf{u}_p - \mathbf{u}_q\|_2^2 + \Psi(l_{p,q})\right). \qquad [2] \]

Here $\Psi(l_{p,q})$ is a penalty on ignoring a connection $(p,q)$: $\Psi(l_{p,q})$ tends to zero when the connection is active ($l_{p,q}\to 1$) and to one when the connection is disabled ($l_{p,q}\to 0$). A broad variety of robust estimators $\rho(\cdot)$ have corresponding penalty functions $\Psi(\cdot)$ such that objectives 1 and 2 are equivalent with respect to $\mathbf{U}$: Optimizing either of the two objectives yields the same set of representatives $\mathbf{U}$. This formulation is related to iteratively reweighted least squares (IRLS) (29), but is more flexible due to the explicit variables $\mathbb{L}$ and the ability to define additional terms over these variables.

Objective 2 can be optimized by any gradient-based method. However, its form enables efficient and scalable optimization by iterative solution of linear least-squares systems. This yields a general approach that can accommodate many robust nonconvex functions $\rho$, reduces clustering to the application of highly optimized off-the-shelf linear system solvers, and easily scales to datasets with hundreds of thousands of points in tens of thousands of dimensions. In comparison, recent work has considered a specific family of concave penalties and derived a computationally intensive majorization–minimization scheme for optimizing the objective in this special case (30). Our work provides a highly efficient general solution.

While the presented approach can accommodate many estimators in the same computationally efficient framework, our exposition and experiments use a form of the well-known Geman–McClure estimator (31),

\[ \rho(y) = \frac{\mu y^2}{\mu + y^2}, \qquad [3] \]

where $\mu$ is a scale parameter. The corresponding penalty function that makes objectives 1 and 2 equivalent with respect to $\mathbf{U}$ is

\[ \Psi(l_{p,q}) = \mu\left(\sqrt{l_{p,q}} - 1\right)^2. \qquad [4] \]
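For concreteness, the estimator in Eq. 3 and its dual penalty in Eq. 4 take only a few lines of NumPy. This is a minimal sketch; the function names are ours and not part of the published implementation.

import numpy as np

def rho(y, mu):
    # Geman-McClure estimator (Eq. 3): approximately quadratic for small y,
    # saturating at mu for large y.
    return mu * y**2 / (mu + y**2)

def psi(l, mu):
    # Penalty on disabling a connection (Eq. 4); psi(1) = 0 and psi(0) = mu.
    return mu * (np.sqrt(l) - 1.0)**2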

Optimization

Objective 2 is biconvex in $(\mathbf{U},\mathbb{L})$. When the variables $\mathbf{U}$ are fixed, the individual pairwise terms decouple and the optimal value of each $l_{p,q}$ can be computed independently in closed form. When the variables $\mathbb{L}$ are fixed, objective 2 turns into a linear least-squares problem. We exploit this special structure and optimize the objective by alternately updating the variable sets $\mathbf{U}$ and $\mathbb{L}$. As a block coordinate descent algorithm, this alternating minimization scheme provably converges.

When $\mathbf{U}$ are fixed, the optimal value of each $l_{p,q}$ is given by

\[ l_{p,q} = \left(\frac{\mu}{\mu + \|\mathbf{u}_p - \mathbf{u}_q\|_2^2}\right)^2. \qquad [5] \]

This can be verified by substituting Eq. 5 into Eq. 2, which yields objective 1 with respect to $\mathbf{U}$.
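The closed-form update in Eq. 5 is independent across edges. A minimal NumPy sketch, assuming the representatives are stored column-wise in a D-by-n array and the edge list as an m-by-2 integer array (names and layout are our assumptions):

import numpy as np

def update_line_process(U, edges, mu):
    # Closed-form update of each l_{p,q} for fixed representatives U (Eq. 5).
    diff = U[:, edges[:, 0]] - U[:, edges[:, 1]]
    sq_dist = np.sum(diff**2, axis=0)
    return (mu / (mu + sq_dist))**2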

When $\mathbb{L}$ are fixed, we can rewrite objective 2 in matrix form and obtain a simplified expression for solving for $\mathbf{U}$,

\[ \operatorname*{arg\,min}_{\mathbf{U}}\; \frac{1}{2}\|\mathbf{X} - \mathbf{U}\|_F^2 + \frac{\lambda}{2}\sum_{(p,q)\in\mathcal{E}} w_{p,q}\, l_{p,q}\, \|\mathbf{U}(\mathbf{e}_p - \mathbf{e}_q)\|_2^2, \qquad [6] \]

where $\mathbf{e}_i$ is an indicator vector with the $i$th element set to 1. This is a linear least-squares problem that can be efficiently solved using fast and scalable solvers. The linear least-squares formulation is given by

\[ \mathbf{U}\mathbf{M} = \mathbf{X}, \quad\text{where}\quad \mathbf{M} = \mathbf{I} + \lambda\sum_{(p,q)\in\mathcal{E}} w_{p,q}\, l_{p,q}\, (\mathbf{e}_p - \mathbf{e}_q)(\mathbf{e}_p - \mathbf{e}_q)^\top. \qquad [7] \]

Here $\mathbf{I}\in\mathbb{R}^{n\times n}$ is the identity matrix. It is easy to prove that

\[ \mathbf{A} = \sum_{(p,q)\in\mathcal{E}} w_{p,q}\, l_{p,q}\, (\mathbf{e}_p - \mathbf{e}_q)(\mathbf{e}_p - \mathbf{e}_q)^\top \qquad [8] \]

is a Laplacian matrix and hence $\mathbf{M}$ is symmetric and positive semidefinite. With any such solver, each row of $\mathbf{U}$ in Eq. 7 can be solved for independently and in parallel.
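Under the same array conventions as above, one way to realize the update of Eq. 7 is to assemble the weighted graph Laplacian of Eq. 8 as a sparse matrix and solve for each row of U with a conjugate-gradient solver. This is a rough sketch under our assumptions, not the authors' reference implementation:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def update_representatives(X, edges, w, l, lam):
    # Solve U M = X (Eq. 7) row by row, with M = I + lam * A and A from Eq. 8.
    # X: (D, n); edges: (m, 2); w, l: (m,).
    D, n = X.shape
    p, q = edges[:, 0], edges[:, 1]
    c = w * l
    W = sp.coo_matrix((c, (p, q)), shape=(n, n))
    W = W + W.T                                            # symmetric edge weights
    A = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W    # weighted graph Laplacian
    M = sp.eye(n) + lam * A
    U = np.empty_like(X)
    for r in range(D):                                     # rows are independent; parallelize if desired
        U[r], _ = cg(M, X[r])
    return U, A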

The RCC algorithm is summarized in Algorithm 1: RCC. Note that all updates of $\mathbf{U}$ and $\mathbb{L}$ optimize the same continuous global objective 2.

The algorithm uses graduated nonconvexity (32). It begins with a locally convex approximation of the objective, obtained by setting $\mu$ such that the second derivative of the estimator is positive ($\ddot{\rho}(y) > 0$) over the relevant part of the domain. Over the iterations, $\mu$ is automatically decreased, gradually introducing nonconvexity into the objective. Under certain assumptions, such continuation schemes are known to attain solutions that are close to the global optimum (33).

The parameter $\lambda$ in the RCC objective 1 balances the strength of the data terms and the pairwise terms. The reformulation of RCC as a linear least-squares problem enables setting $\lambda$ automatically. Specifically, Eq. 7 suggests that the data terms and pairwise terms can be balanced by setting

\[ \lambda = \frac{\|\mathbf{X}\|_2}{\|\mathbf{A}\|_2}. \qquad [9] \]

The value of $\lambda$ is updated automatically according to this formula after every update of $\mu$. An update involves computing only the largest eigenvalue of the Laplacian matrix $\mathbf{A}$. The spectral norm of $\mathbf{X}$ is precomputed at initialization and reused.
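As an illustration of Eq. 9, the two spectral norms can be obtained with standard routines; the data-matrix norm needs to be computed only once. A sketch, assuming A is the sparse Laplacian assembled above:

import numpy as np
from scipy.sparse.linalg import eigsh

def balance_lambda(X_spectral_norm, A):
    # lambda = ||X||_2 / ||A||_2 (Eq. 9); ||X||_2 = np.linalg.norm(X, 2) is precomputed once.
    a_norm = eigsh(A, k=1, return_eigenvectors=False)[0]   # largest eigenvalue of the PSD Laplacian
    return X_spectral_norm / a_norm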

Additional details concerning Algorithm 1 are provided in SI Methods.

SI Methods

Initialization and Output.

We initialize the optimization with $\mathbf{U} = \mathbf{X}$ and $l_{p,q} = 1$. The output clusters are the weakly connected components of a graph in which a pair $\mathbf{u}_i$ and $\mathbf{u}_j$ is connected by an edge if and only if $\|\mathbf{u}_i - \mathbf{u}_j\|_2 < \delta$. The threshold $\delta$ is set to be the mean of the lengths of the shortest 1% of the edges in $\mathcal{E}$.
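In code, the final labeling can be obtained with a connected-components pass. The sketch below tests only pairs in the connectivity structure, which is a simplifying assumption on our part rather than a statement about the reference implementation:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components

def extract_clusters(U, edges, delta):
    # Connect representatives that ended up within delta of each other,
    # then label each connected component as one cluster.
    diff = U[:, edges[:, 0]] - U[:, edges[:, 1]]
    keep = np.linalg.norm(diff, axis=0) < delta
    n = U.shape[1]
    G = sp.coo_matrix((np.ones(keep.sum()), (edges[keep, 0], edges[keep, 1])), shape=(n, n))
    n_clusters, labels = connected_components(G, directed=False)
    return n_clusters, labels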

Connectivity Structure.

The connectivity structure $\mathcal{E}$ is based on m-kNN connectivity (27), which is more robust than commonly used kNN graphs. We use $k = 10$ and the cosine similarity metric for m-kNN graph construction. In an m-kNN graph, two nodes are connected by an edge if and only if each is among the $k$ nearest neighbors of the other. This allows statistically different clusters (e.g., different scales) to remain disconnected. A downside of this connectivity scheme is that some nodes in an m-kNN graph may be sparsely connected or even disconnected. To make sure that no data point is isolated, we augment $\mathcal{E}$ with the minimum spanning tree of the $k$-nearest-neighbors graph of the dataset. To balance the contribution of each node to the objective, we set

\[ w_{p,q} = \frac{\sum_{i=1}^{n} N_i}{n\sqrt{N_p N_q}}, \qquad [S1] \]

where $N_i$ is the number of edges incident to $\mathbf{x}_i$ in $\mathcal{E}$.
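A rough construction of this connectivity structure with scikit-learn and SciPy is sketched below; details such as tie handling and the exact MST augmentation may differ from the authors' code:

import numpy as np
from sklearn.neighbors import NearestNeighbors
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_knn_edges(X_rows, k=10):
    # X_rows: one sample per row. Build the mutual kNN graph under cosine distance,
    # add the MST of the kNN distance graph so that no node is isolated,
    # and compute the balancing weights of Eq. S1.
    n = X_rows.shape[0]
    nn = NearestNeighbors(n_neighbors=k + 1, metric='cosine').fit(X_rows)
    conn = nn.kneighbors_graph(X_rows, mode='connectivity')
    conn.setdiag(0)                                      # drop self-neighbors
    mutual = conn.multiply(conn.T)                       # keep an edge only if it goes both ways
    mst = minimum_spanning_tree(nn.kneighbors_graph(X_rows, mode='distance'))
    combined = ((mutual + mst + mst.T) > 0).tocoo()
    edges = np.vstack([combined.row, combined.col]).T
    edges = edges[edges[:, 0] < edges[:, 1]]             # one record per undirected edge
    degree = np.bincount(edges.ravel(), minlength=n)
    w = degree.mean() / np.sqrt(degree[edges[:, 0]] * degree[edges[:, 1]])
    return edges, w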

Graduated Nonconvexity.

The penalty function in Eq. 3 is nonconvex and its shape depends on the value of the parameter $\mu$. To support convergence to a good solution, we use graduated nonconvexity (32). We begin by setting $\mu$ such that the objective is convex over the relevant range and gradually decrease $\mu$ to sharpen the penalty and neutralize the influence of spurious connections in $\mathcal{E}$. Specifically, $\mu$ is initially set to $\mu = 3r^2$, where $r$ is the maximal edge length in $\mathcal{E}$. The value of $\mu$ is halved every four iterations until it drops below $\delta/2$.
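The continuation schedule is easy to express as a loop skeleton; the placement of the alternating updates inside the loop is our paraphrase of Algorithm 1, not a verbatim transcription:

def rcc_outer_loop(max_edge_len, delta, max_iterations=100):
    mu = 3.0 * max_edge_len**2           # start in the (locally) convex regime
    for it in range(max_iterations):
        # ... update the line process (Eq. 5), then the representatives (Eq. 7) ...
        if (it + 1) % 4 == 0 and mu >= delta / 2.0:
            mu /= 2.0                    # gradually introduce nonconvexity
    # termination on the change in the objective (epsilon) is omitted in this sketch
    return mu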

Parameter Setting.

The termination conditions are set to maxiterations = 100 and $\varepsilon = 0.1$.

For RCC-DR, the sparse coding parameters are set to $d = 100$, $\xi = 8$, $\gamma = 0.2$, and $\eta = 0.9$. The dictionary is initialized using PCA components. Due to the small input dimension, we set $d = 8$ for the Shuttle, Pendigits, and Mice Protein datasets. The parameters $\delta_2$ and $\mu_2$ in RCC-DR are computed using $\mathbf{Y}$, by analogy to their counterparts in RCC. To set $\delta_1$, we compute the distance $r_i$ of each data point $\mathbf{x}_i$ from the mean of the data $\mathbf{X}$ and set $\delta_1 = \operatorname{mean}(2 r_i)$. The initial value of $\mu_1$ is set to $\mu_1 = \xi\delta_1$. The parameter $\lambda$ is set automatically to

\[ \lambda = \frac{\|\mathbf{D}\mathbf{Y}\|_2}{\|\mathbf{A}\|_2 + \|\mathbf{H}\|_2}. \qquad [S2] \]

Implementation.

We use an approximate nearest-neighbor search to construct the connectivity structure (54) and a conjugate gradient solver for linear systems (55).

The RCC-DR Algorithm.

The RCC-DR algorithm is summarized in Algorithm S1: Joint Clustering and Dimensionality Reduction.

Joint Clustering and Dimensionality Reduction

The RCC formulation can be interpreted as learning a graph-regularized embedding $\mathbf{U}$ of the data $\mathbf{X}$. In Algorithm 1 the dimensionality of the embedding $\mathbf{U}$ is the same as the dimensionality of the data $\mathbf{X}$. However, since RCC optimizes a continuous and differentiable objective, it can be used within end-to-end feature learning pipelines. We now demonstrate this by extending RCC to perform joint clustering and dimensionality reduction. Such joint optimization has been considered in recent work (34, 35). The algorithm we develop, RCC-DR, learns a linear mapping into a reduced space in which the data are clustered. The mapping is optimized as part of the clustering objective, yielding an embedding in which the data can be clustered most effectively. RCC-DR inherits the appealing properties of RCC: Clustering and dimensionality reduction are performed jointly by optimizing a clear continuous objective, the framework supports nonconvex robust estimators that can untangle mixed clusters, and optimization is performed by efficient and scalable numerical methods.

Algorithm 1. RCC

We begin by considering an initial formulation for the RCC-DR objective:

\[ \mathcal{C}(\mathbf{D},\mathbf{Y},\mathbf{Z}) = \|\mathbf{X} - \mathbf{D}\mathbf{Y}\|_2^2 + \gamma\sum_{i=1}^{n}\|\mathbf{y}_i\|_1 + \nu\left(\sum_{i=1}^{n}\|\mathbf{y}_i - \mathbf{z}_i\|_2^2 + \frac{\lambda}{2}\sum_{(p,q)\in\mathcal{E}} w_{p,q}\,\rho\!\left(\|\mathbf{z}_p - \mathbf{z}_q\|_2\right)\right). \qquad [10] \]

Here $\mathbf{D}\in\mathbb{R}^{D\times d}$ is a dictionary, $\mathbf{y}_i\in\mathbb{R}^{d}$ is a sparse code corresponding to the $i$th data sample, and $\mathbf{z}_i\in\mathbb{R}^{d}$ is the low-dimensional embedding of $\mathbf{x}_i$. For a fixed $\mathbf{D}$, the parameter $\nu$ balances the data term in the sparse coding objective with the clustering objective in the reduced space. This initial formulation 10 is problematic because in the beginning of the optimization the representation $\mathbf{Z}$ can be noisy due to spurious intercluster connections that have not yet been disabled. This had no effect on the convergence of the original RCC objective 1, but in formulation 10 the contamination of $\mathbf{Z}$ can infect the sparse coding system via $\mathbf{Y}$ and corrupt the dictionary $\mathbf{D}$. For this reason, we use a different formulation that has the added benefit of eliminating the parameter $\nu$:

\[ \mathcal{C}(\mathbf{D},\mathbf{Y},\mathbf{Z}) = \|\mathbf{X} - \mathbf{D}\mathbf{Y}\|_2^2 + \gamma\sum_{i=1}^{n}\|\mathbf{y}_i\|_1 + \sum_{i=1}^{n}\rho_1\!\left(\|\mathbf{y}_i - \mathbf{z}_i\|_2\right) + \frac{\lambda}{2}\sum_{(p,q)\in\mathcal{E}} w_{p,q}\,\rho_2\!\left(\|\mathbf{z}_p - \mathbf{z}_q\|_2\right). \qquad [11] \]

Here we replaced the $\ell_2$ penalty on the data term in the reduced space with a robust penalty. We use the Geman–McClure estimator 3 for both $\rho_1$ and $\rho_2$.

To optimize objective 11, we introduce line processes $\mathbb{L}^1$ and $\mathbb{L}^2$ corresponding to the data and pairwise terms in the reduced space, respectively, and optimize a joint objective over $\mathbf{D}$, $\mathbf{Y}$, $\mathbf{Z}$, $\mathbb{L}^1$, and $\mathbb{L}^2$. The optimization is performed by block coordinate descent over these groups of variables. The line processes $\mathbb{L}^1$ and $\mathbb{L}^2$ can be updated in closed form as in Eq. 5. The variables $\mathbf{Z}$ are updated by solving the linear system

\[ \mathbf{Z}\mathbf{M}_{\mathrm{dr}} = \mathbf{Y}\mathbf{H}, \qquad [12] \]

where

\[ \mathbf{M}_{\mathrm{dr}} = \mathbf{H} + \lambda\sum_{(p,q)\in\mathcal{E}} w_{p,q}\, l^2_{p,q}\,(\mathbf{e}_p - \mathbf{e}_q)(\mathbf{e}_p - \mathbf{e}_q)^\top \qquad [13] \]

and $\mathbf{H}$ is a diagonal matrix with $h_{i,i} = l^1_i$.

The dictionary $\mathbf{D}$ and codes $\mathbf{Y}$ are initialized using principal component analysis (PCA). [The K-SVD algorithm can also be used for this purpose (36).] The variables $\mathbf{Y}$ are updated by accelerated proximal gradient-descent steps (37),

\[ \bar{\mathbf{Y}} = \mathbf{Y}^t + \omega^t\left(\mathbf{Y}^t - \mathbf{Y}^{t-1}\right), \]
\[ \mathbf{Y}^{t+1} = \operatorname{prox}_{\tau\gamma\|\cdot\|_1}\!\left(\bar{\mathbf{Y}} - \tau\left(\mathbf{D}^\top\!\left(-\mathbf{X} + \mathbf{D}\bar{\mathbf{Y}}\right) + \left(\bar{\mathbf{Y}} - \mathbf{Z}\right)\mathbf{H}\right)\right), \qquad [14] \]

where $\tau = \frac{1}{\|\mathbf{D}^\top\mathbf{D}\|_2 + \|\mathbf{H}\|_2}$ and $\omega^t = \frac{t}{t+3}$. The $\operatorname{prox}_{\varepsilon\|\cdot\|_1}$ operator performs elementwise soft thresholding:

\[ \operatorname{prox}_{\varepsilon\|\cdot\|_1}(v) = \operatorname{sign}(v)\,\max\!\left(0, |v| - \varepsilon\right). \qquad [15] \]
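A compact NumPy rendering of the proximal step in Eqs. 14 and 15 follows; the array shapes (samples in columns) and function names are our assumptions:

import numpy as np

def soft_threshold(V, eps):
    # prox of eps * ||.||_1: elementwise soft thresholding (Eq. 15).
    return np.sign(V) * np.maximum(0.0, np.abs(V) - eps)

def update_codes(Y_prev, Y_curr, X, D, Z, h, gamma, t):
    # One accelerated proximal gradient step on the sparse codes (Eq. 14).
    # X: (D_feat, n), D: (D_feat, d), Y_*: (d, n), Z: (d, n), h: diagonal of H, shape (n,).
    tau = 1.0 / (np.linalg.norm(D.T @ D, 2) + h.max())
    omega = t / (t + 3.0)
    Y_bar = Y_curr + omega * (Y_curr - Y_prev)
    grad = D.T @ (D @ Y_bar - X) + (Y_bar - Z) * h       # (Y_bar - Z) H for diagonal H
    return soft_threshold(Y_bar - tau * grad, tau * gamma)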

The variables $\mathbf{D}$ are updated using

\[ \bar{\mathbf{D}} = \mathbf{X}\mathbf{Y}^\top\left(\mathbf{Y}\mathbf{Y}^\top + \beta\mathbf{I}\right)^{-1}, \qquad [16] \]
\[ \mathbf{D}^{t+1} = \eta\mathbf{D}^t + (1-\eta)\bar{\mathbf{D}}, \qquad [17] \]

where $\beta$ is a small regularization value set to $\beta = 10^{-4}\operatorname{tr}(\mathbf{Y}\mathbf{Y}^\top)$.
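The dictionary step is a damped ridge-regression solve; a sketch under the same shape conventions as above:

import numpy as np

def update_dictionary(D_curr, X, Y, eta=0.9):
    # Regularized least-squares fit of the dictionary (Eq. 16), damped by eta (Eq. 17).
    d = Y.shape[0]
    beta = 1e-4 * np.trace(Y @ Y.T)
    D_bar = X @ Y.T @ np.linalg.inv(Y @ Y.T + beta * np.eye(d))
    return eta * D_curr + (1.0 - eta) * D_bar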

A precise specification of the RCC-DR algorithm is provided in Algorithm S1.

Algorithm S1.

Joint Clustering and Dimensionality Reduction

Experiments

Datasets.

We have conducted experiments on datasets from multiple domains. The dimensionality of the data in the different datasets varies from 9 to just below 50,000. Reuters-21578 is the classic benchmark for text classification, comprising 21,578 articles that appeared on the Reuters newswire in 1987. RCV1 is a more recent benchmark of 800,000 manually categorized Reuters newswire articles (38). (Due to limited scalability of some prior algorithms, we use 10,000 random samples from RCV1.) Shuttle is a dataset from NASA that contains 58,000 multivariate measurements produced by sensors in the radiator subsystem of the Space Shuttle; these measurements are known to arise from seven different conditions of the radiators. Mice Protein is a dataset that consists of the expression levels of 77 proteins measured in the cerebral cortex of eight classes of control and trisomic mice (39). The last two datasets were obtained from the University of California, Irvine, machine-learning repository (40).

MNIST is the classic dataset of 70,000 hand-written digits (41). Pendigits is another well-known dataset of hand-written digits (42). The Extended Yale Face Database B (YaleB) contains images of faces of 28 human subjects (43). The YouTube Faces Database (YTF) contains videos of faces of different subjects (44); we use all video frames from the first 40 subjects sorted in chronological order. Columbia University Image Library (COIL-100) is a classic collection of color images of 100 objects, each imaged from 72 viewpoints (45). The datasets are summarized in Table 1.

Table 1.

Datasets used in experiments

Baselines.

We compare RCC and RCC-DR to 13 baselines, which include widely known clustering algorithms as well as recent techniques that were reported to achieve state-of-the-art performance. Our baselines are k-means++ (24), Gaussian mixture models (GMM), fuzzy clustering, mean-shift clustering (MS) (9), two variants of agglomerative clustering (AC-Complete and AC-Ward), normalized cuts (N-Cuts) (2), affinity propagation (AP) (10), Zeta l-links (Zell) (46), spectral embedded clustering (SEC) (47), clustering using local discriminant models and global integration (LDMGI) (48), graph degree linkage (GDL) (49), and path integral clustering (PIC) (50). The parameter settings for the baselines are summarized in Table S1.

Table S1.

Parameter settings for baselines

Measures.

Normalized mutual information (NMI) has emerged as the standard measure for evaluating clustering accuracy in the machine-learning community (51). However, NMI is known to be biased in favor of fine-grained partitions. For this reason, we use adjusted mutual information (AMI), which removes this bias (52). This measure is defined as follows:
$$\operatorname{AMI}(\mathcal{C}, \hat{\mathcal{C}}) = \frac{\operatorname{MI}(\mathcal{C}, \hat{\mathcal{C}}) - E\!\left[\operatorname{MI}(\mathcal{C}, \hat{\mathcal{C}})\right]}{\sqrt{H(\mathcal{C})\,H(\hat{\mathcal{C}})} - E\!\left[\operatorname{MI}(\mathcal{C}, \hat{\mathcal{C}})\right]}. \quad [18]$$
Here $H(\cdot)$ is the entropy, $\operatorname{MI}(\cdot,\cdot)$ is the mutual information, and $\mathcal{C}$ and $\hat{\mathcal{C}}$ are the two partitions being compared. For completeness, Table S2 provides an evaluation using the NMI measure.
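
Both measures are available in standard toolkits; a minimal sketch using scikit-learn (the label arrays below are purely illustrative):

```python
from sklearn.metrics import adjusted_mutual_info_score, normalized_mutual_info_score

# Ground-truth partition and a predicted clustering (illustrative labels only).
labels_true = [0, 0, 1, 1, 2, 2]
labels_pred = [1, 1, 0, 0, 2, 2]

ami = adjusted_mutual_info_score(labels_true, labels_pred)
nmi = normalized_mutual_info_score(labels_true, labels_pred)
print(f"AMI = {ami:.3f}, NMI = {nmi:.3f}")
```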

Table S2.

Accuracy of all algorithms on all datasets, measured by NMI

Results.

Results on all datasets are reported in Table 2. In addition to accuracy on each dataset, Table 2 also reports the average rank of each algorithm across datasets. For example, if an algorithm achieves the third-highest accuracy on half of the datasets and the fourth-highest one on the other half, its average rank is 3.5. If an algorithm did not yield a result on a dataset due to its size, that dataset is not taken into account in computing the average rank of the algorithm.
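
A minimal sketch of this ranking procedure (the accuracy table below is illustrative; NaN marks a run that did not complete):

```python
import numpy as np
from scipy.stats import rankdata

# Rows: datasets; columns: algorithms. NaN marks a missing result.
accuracy = np.array([
    [0.82, 0.75, 0.90],
    [0.60, np.nan, 0.71],
    [0.55, 0.58, 0.66],
])

rank_table = np.full_like(accuracy, np.nan)
for i, row in enumerate(accuracy):
    valid = ~np.isnan(row)
    # Rank 1 = highest accuracy on this dataset; ties receive the average rank.
    rank_table[i, valid] = rankdata(-row[valid])

# Average rank per algorithm, ignoring datasets where it produced no result.
print(np.nanmean(rank_table, axis=0))
```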

Table 2.

Accuracy of all algorithms on all datasets, measured by AMI

RCC or RCC-DR achieves the highest accuracy on seven of the nine datasets. RCC-DR achieves the highest or second-highest accuracy on eight of the nine datasets, and RCC achieves the highest or second-highest accuracy on five datasets. The average rank of RCC-DR and RCC is 1.6 and 2.4, respectively. The best-performing prior algorithm, LDMGI, has an average rank of 4.9, roughly three times that of RCC-DR. This indicates that the performance of prior algorithms is not only lower than that of RCC and RCC-DR but also inconsistent: no prior algorithm clearly leads the others across datasets. In contrast, the low average ranks of RCC and RCC-DR indicate consistently high performance across datasets.

Clustering Gene Expression Data.

We conducted an additional comprehensive evaluation on a large-scale benchmark that consists of more than 30 cancer gene expression datasets, collected for the purpose of evaluating clustering algorithms (53). The results are reported in Table S3. RCC-DR achieves the highest accuracy on eight of the datasets. Among the prior algorithms, affinity propagation achieves the highest accuracy on six of the datasets and all others on fewer. Overall, RCC-DR achieves the highest average AMI across the datasets.

Table S3.

AMI on cancer gene expression datasets

Running Time.

The execution time of RCC-DR optimization is visualized in Fig. 2. For reference, we also show the corresponding timings for affinity propagation, a well-known modern clustering algorithm (10), and for LDMGI, the baseline that demonstrated the best performance across datasets (48). Fig. 2 shows the running time of each algorithm on randomly sampled subsets of the 784-dimensional MNIST dataset; we sample subsets of different sizes to evaluate how runtime grows with dataset size. Performance is measured on a workstation with an Intel Core i7-5960X CPU clocked at 3.0 GHz. RCC-DR clusters the whole MNIST dataset within 200 s, whereas affinity propagation takes 37 h and LDMGI takes 17 h on a 40,000-point subset.

Fig. 2.

Runtime comparison of RCC-DR with AP and LDMGI. Runtime is evaluated as a function of dataset size, using randomly sampled subsets of different sizes from the MNIST dataset.

Visualization.

We now qualitatively analyze the output of RCC by visualization, using the MNIST dataset. On this dataset, RCC identifies 17 clusters. Nine of these are large clusters with more than 6,000 instances each. The remaining eight are small clusters that encapsulate outlying data points: seven contain between 2 and 11 instances, and one contains 148 instances. Fig. 3A shows 10 randomly sampled data points $\mathbf{x}_i$ from each of the large clusters discovered by RCC. Their corresponding representatives $\mathbf{u}_i$ are shown in Fig. 3B. Fig. 3C shows 2 randomly sampled data points from each of the small outlying clusters. Additional visualization of RCC output on the Coil-100 dataset is shown in Fig. S3.

Fig. 3.

Visualization of RCC output on the MNIST dataset. (A) Ten randomly sampled instances $\mathbf{x}_i$ from each large cluster discovered by RCC, one cluster per row. (B) Corresponding representatives $\mathbf{u}_i$ from the learned representation $\mathbf{U}$. (C) Two random samples from each of the small outlying clusters discovered by RCC.

Fig. S3.

Visualization of RCC output on the Coil-100 dataset. (A) Ten randomly sampled instances $\mathbf{x}_i$ from each of 10 clusters randomly sampled from the clusters discovered by RCC, one cluster per row. (B) Corresponding representatives $\mathbf{u}_i$ from the learned representation $\mathbf{U}$.

Fig. 4 compares the representation $\mathbf{U}$ learned by RCC to representations learned by the best-performing prior algorithms, LDMGI and N-Cuts. We use the MNIST dataset for this purpose and visualize the output of the algorithms on a subset of 5,000 randomly sampled instances from this dataset. Both of the prior algorithms construct Euclidean representations of the data, which can be visualized by dimensionality reduction. We use t-SNE (23) to visualize the representations discovered by the algorithms. As shown in Fig. 4, the representation discovered by RCC cleanly separates the different clusters by significant margins. In contrast, the prior algorithms fail to discover the structure of the data and leave some of the clusters intermixed.
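
A minimal sketch of this kind of visualization (assuming U is an n × d array holding a learned representation and labels holds the ground-truth classes, used only for coloring):

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embedding(U, labels, title):
    # Project the representation to 2-D with t-SNE and color points by class.
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(U)
    plt.figure(figsize=(5, 5))
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=3, cmap="tab10")
    plt.title(title)
    plt.axis("off")
    plt.show()

# Example (placeholders): plot_embedding(U_rcc, y_true, "RCC representation")
```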

Fig. 4.

(A–C) Visualization of the representations learned by RCC (A) and the best-performing prior algorithms, LDMGI (B) and N-Cuts (C). The algorithms are run on 5,000 randomly sampled instances from the MNIST dataset. The learned representations are visualized using t-SNE.

Discussion

We have presented a clustering algorithm that optimizes a continuous objective based on robust estimation. The objective is optimized using linear least-squares solvers, which scale to large high-dimensional datasets. The robust terms in the objective enable separation of entangled clusters, yielding high accuracy across datasets and domains.

The continuous form of the clustering objective allows it to be integrated into end-to-end feature learning pipelines. We have demonstrated this by extending the algorithm to perform joint clustering and dimensionality reduction.

SI Experiments

Datasets.

For Reuters-21578 we combine the train and test sets of the Modified Apte split and use only samples from categories with more than five examples. For RCV1 we consider four root categories and a random subset of 10,000 samples. We compute term frequency–inverse document frequency (TF–IDF) features on the 2,000 most frequently occurring word stems. For the text datasets, the graph is constructed on the PCA-projected input, with the number of PCA components set to the number of ground-truth clusters.
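
A minimal sketch of this text preprocessing using scikit-learn (stemming is omitted here; corpus and n_clusters are placeholders):

```python
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

def text_features(corpus, n_clusters):
    # TF-IDF features over the 2,000 most frequent terms, followed by a PCA
    # projection with as many components as there are ground-truth clusters.
    tfidf = TfidfVectorizer(max_features=2000).fit_transform(corpus)
    return PCA(n_components=n_clusters).fit_transform(tfidf.toarray())
```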

On YaleB we consider only the frontal face images and preprocess them using gamma correction and a difference-of-Gaussians (DoG) filter. For YTF we use all of the video frames from the first 40 subjects sorted in chronological order. For all image datasets we scale the pixel intensities to the range [0, 1]. For all other datasets, we normalize the features so that $\|\mathbf{x}\|_2^2 \approx D$.
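
One natural way to realize this normalization is sketched below (assuming a data matrix X with one D-dimensional sample per row):

```python
import numpy as np

def normalize_features(X):
    # Scale features so the average squared norm of a sample is roughly
    # the dimensionality D of the data.
    D = X.shape[1]
    mean_sq_norm = np.mean(np.sum(X ** 2, axis=1))
    return X * np.sqrt(D / mean_sq_norm)
```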

Baselines.

For k-means++, GMM, mean shift, AC-Complete, AC-Ward, and AP we use the implementations in the scikit-learn package. For fuzzy clustering, we use the implementation provided by MATLAB. For N-Cuts, Zell, SEC, LDMGI, GDL, and PIC we use the publicly available implementations published by the authors of these methods. For all algorithms that use k-nearest-neighbor graphs, we set k = 10.
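
A minimal sketch of such a graph construction, using scikit-learn (each algorithm consumes the resulting neighborhood structure in its own way):

```python
from sklearn.neighbors import kneighbors_graph

def knn_graph(X, k=10):
    # Sparse adjacency matrix connecting each point to its k nearest neighbors.
    return kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
```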

Unlike the presented algorithms, many baselines rely on multiple executions with random restarts. To maximize their reported accuracy, we use 10 random restarts for these baselines. Following common practice, for k-means++, GMM, and LDMGI we pick the best result based on the value of the objective function at termination, whereas for fuzzy clustering, N-Cuts, Zell, SEC, GDL, and PIC we take the average across 10 random restarts.

Most of the baselines require setting one or more parameters. For a fair comparison, for each algorithm we tune one major parameter across datasets and use the default values for all other parameters. For all algorithms, the tuned value is selected based on the best average performance across all datasets. Parameter settings for the baselines are summarized in Table S1. The notation (m : s : M) indicates that the parameter search is conducted over the range (m, M) with step s.

Additional Accuracy Measure.

For completeness, we evaluate the accuracy of RCC, RCC-DR, and all baselines, using the NMI measure (51, 52). The results are reported in Table S2.

Results on Gene Expression Datasets.

Table S3 lists AMI results on more than 30 cancer gene expression datasets collected by de Souto et al. (53). The maximum number of samples across these datasets is only 248, and for all but one dataset the dimensionality satisfies $D \gg n$. Since these datasets are statistically very different from those discussed earlier, for each algorithm we retune the major parameter over the same range given in Table S1. For both RCC and RCC-DR, we set k = 9. For RCC-DR we set d = 12 and γ = 0.5. The author-provided code for GDL breaks on these datasets.

Robustness to Hyperparameter Settings.

The parameters of the RCC algorithm are set automatically based on the data. The RCC-DR algorithm does have a number of parameters but is largely insensitive to their settings. In the following experiment, we vary the sparse-coding parameters d, η, and γ in the ranges d = (40 : 20 : 200), η = (0.55 : 0.05 : 0.95), and γ = (0.1 : 0.1 : 0.9). Fig. S1 A and B compares the sensitivity of RCC-DR to these parameters with the sensitivity of the best-performing prior algorithms to their key parameters. For each baseline, we use the default search range proposed in their respective papers. The x axis in Fig. S1 corresponds to the parameter index. As Fig. S1 demonstrates, the accuracy of RCC-DR is robust to hyperparameter settings: the relative change of RCC-DR accuracy in AMI on YaleB is 0.005, 0.008, and 0 across the ranges of d, η, and γ, respectively. On the other hand, the sensitivity of the baselines is much higher: the relative change in accuracy of SEC, LDMGI, N-Cuts, and GDL is 0.091, 0.049, 0.740, and 0.021, respectively. Moreover, for SEC, LDMGI, and GDL no single parameter setting works best across different datasets.

Fig. S1.

(A and B) Robustness to hyperparameter settings on the YaleB (A) and Reuters (B) datasets.

Robustness to Dataset Imbalance.

We now evaluate the robustness of different approaches to imbalance in class sizes. This experiment uses the MNIST dataset. We control the degree of imbalance by varying a parameter s between 0.1 and 1: the class “0” is sampled with probability s, the class “9” is sampled with probability 1, and the sampling probabilities of the other classes vary linearly between s and 1. For each value of s, we sample 10,000 data points and evaluate the accuracy of RCC, RCC-DR, and the top-performing baselines on the resulting dataset. The results are reported in Fig. S2. The RCC and RCC-DR algorithms retain their accuracy advantage on imbalanced datasets.
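
A minimal sketch of one way to realize this sampling protocol (assuming X holds the MNIST images and y holds integer labels 0–9; details such as the exact subsampling to 10,000 points are illustrative):

```python
import numpy as np

def imbalanced_subsample(X, y, s, n_target=10000, seed=0):
    # Keep class 0 with probability s, class 9 with probability 1, and the
    # remaining classes with linearly interpolated probabilities.
    rng = np.random.default_rng(seed)
    keep_prob = np.linspace(s, 1.0, 10)[y]
    idx = np.flatnonzero(rng.random(len(y)) < keep_prob)
    idx = rng.choice(idx, size=min(n_target, len(idx)), replace=False)
    return X[idx], y[idx]
```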

Fig. S2.

Robustness to dataset imbalance.

Visualization.

Fig. S3A shows 10 randomly sampled data points $\mathbf{x}_i$ from each of 10 clusters randomly sampled from the clusters discovered by RCC on the Coil-100 dataset. Fig. S3B shows the corresponding representatives $\mathbf{u}_i$.

Learned Representation.

One way to quantitatively evaluate how well the learned representation $\mathbf{U}$ captures the structure of the data is to use it as input to other clustering algorithms and to check whether they are more successful on $\mathbf{U}$ than on the original data $\mathbf{X}$. The results of this experiment are reported in Table S4. Table S4, Left reports the performance of multiple baselines when they are given, as input, the representation $\mathbf{U}$ produced by RCC. Table S4, Right reports the corresponding results when the baselines are given the representation $\mathbf{U}$ produced by RCC-DR.
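
A minimal sketch of this protocol, scoring a standard baseline on the learned representation versus the raw data (variable names are placeholders):

```python
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score

def kmeans_ami(features, labels_true, n_clusters):
    # Cluster the given features with k-means++ and score against ground truth.
    pred = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    return adjusted_mutual_info_score(labels_true, pred)

# ami_raw = kmeans_ami(X, y, k); ami_learned = kmeans_ami(U, y, k)
```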

Table S4.

Success of the learned representation $\mathbf{U}$ in capturing the structure of the data, evaluated by running prior clustering algorithms on $\mathbf{U}$ instead of $\mathbf{X}$

The results indicate that the performance of prior clustering algorithms improves significantly when they are run on the representations learned by RCC and RCC-DR. The accuracy improvements for k-means++, AC-Ward, and affinity propagation are particularly notable.

Footnotes

  • 1. To whom correspondence should be addressed. Email: sohilas@umd.edu.

Freely available online through the PNAS open access option.

