The need for non-linear mappings

In the last post, I recalled the problem of dimension reduction for finite subsets of L_1.  I thought I should mention the main obstacle to reducing the dimension below O(n) for n-point subsets:  It can’t be done with linear mappings.

All the general results mentioned in that post use a linear mapping.  In fact, they are all of the form:

  1. Change of density, i.e. preprocess the points/subspace so that no point has too much weight on any one coordinate.
  2. Choose a subset of the coordinates, possibly multiplying the chosen coordinates by non-negative weights.  (Note that the Newman-Rabinovich result, based on Batson, Spielman, and Srivastava, is deterministic, while in the other bounds, the sampling is random.)

(There is also a non-linear dimension reduction, but it applies only to special subsets of L_1, like the Brinkman-Charikar point set.)
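To make the second step concrete, here is a toy sketch in Python of coordinate sampling with non-negative weights (the function names, the i.i.d. sampling, and the particular density q below are my own illustration, not the choices made in the results above): sample d coordinates from a density q over the coordinates and weight coordinate i by 1/(d q_i).  The resulting map is linear, and for any vector x the embedded \ell_1 norm is an unbiased estimate of \|x\|_1; the point of the density-change step is to choose q so that this estimate also concentrates for all pairwise differences.

    import numpy as np

    def coordinate_sampling_map(points, d, seed=0):
        # points: (n, N) array of n points in ell_1^N, one per row.
        # Returns (idx, weights) defining the linear map x -> weights * x[idx].
        rng = np.random.default_rng(seed)
        n, N = points.shape

        # Step 1 (toy "change of density"): a sampling distribution over the
        # coordinates, proportional to the total ell_1 mass the points place on
        # each coordinate, mixed with uniform so that every q_i is positive.
        mass = np.abs(points).sum(axis=0)
        q = 0.5 * mass / max(mass.sum(), 1e-12) + 0.5 / N
        q = q / q.sum()

        # Step 2: choose d coordinates (i.i.d. with replacement, for simplicity)
        # and reweight by 1/(d * q_i); then ||weights * x[idx]||_1 is an unbiased
        # estimate of ||x||_1 for every vector x, in particular for differences
        # of the given points.
        idx = rng.choice(N, size=d, p=q)
        weights = 1.0 / (d * q[idx])
        return idx, weights

    def embed(x, idx, weights):
        # Apply the linear map defined by (idx, weights) to a single vector x.
        return weights * x[idx]

    if __name__ == "__main__":
        pts = np.random.default_rng(1).standard_normal((10, 1000))
        idx, w = coordinate_sampling_map(pts, d=200)
        a, b = pts[0], pts[1]
        print(np.abs(a - b).sum())                                 # true ell_1 distance
        print(np.abs(embed(a, idx, w) - embed(b, idx, w)).sum())   # sampled estimate

Sampling with replacement and this particular q are only for illustration; the results cited above choose the density and the weights much more carefully (and, in the Newman-Rabinovich bound, deterministically).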

The next theorem shows that linear mappings cannot reduce the dimension below \Omega(n) while keeping the distortion O(1).

Theorem: For every 1 \leq p \leq \infty, there are arbitrarily large n-point subsets of L_p on which any linear embedding into L_2 incurs distortion at least \left(\frac{n-1}{2}\right)^{|1/p-1/2|}.
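For instance, at the two extremes p = 1 and p = \infty the exponent is |1/p - 1/2| = 1/2, so the lower bound reads

\left(\frac{n-1}{2}\right)^{1/2} = \sqrt{\frac{n-1}{2}},

while at p = 2 the exponent vanishes and the bound degenerates to 1, as it must, since any subset of L_2 admits an isometric linear embedding into L_2.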

Since the identity map from \ell_1^d to \ell_2^d has distortion at most \sqrt{d}, any linear embedding of such a point set into \ell_1^d with distortion D yields, after composing with this identity map, a linear embedding into L_2 with distortion at most D\sqrt{d}.  The p=1 case of the theorem then forces D\sqrt{d} \geq \sqrt{(n-1)/2}, so the theorem immediately implies that there are n-point subsets of L_1 on which any linear embedding with O(1) distortion requires \Omega(n) dimensions.  The p=1 case of the preceding theorem was proved by Charikar and Sahai.  A simpler proof, which extends to all p \geq 1, is given in Lemma 3.1 of a paper by myself, Mendel, and Naor.
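The dimension count above is just arithmetic; here is a small sketch of it (the helper name is mine, and it assumes the p = 1 bound from the theorem):

    import math

    def min_linear_l1_dimension(n, D):
        # A linear map of the hard n-point set into ell_1^d with distortion D,
        # composed with the identity ell_1^d -> ell_2^d (distortion sqrt(d)),
        # is a linear embedding into L_2 with distortion at most D * sqrt(d).
        # The theorem (p = 1) forces D * sqrt(d) >= sqrt((n - 1) / 2), i.e.
        #     d >= (n - 1) / (2 * D**2).
        return math.ceil((n - 1) / (2 * D ** 2))

    # With distortion D = 2, a hard set of 10,000 points needs d >= 1250.
    print(min_linear_l1_dimension(10_000, 2.0))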
