Entropy optimality on path space

After Boaz posted on the mother of all inequalities, it seemed about the right time to get around to the next series of posts on entropy optimality. The approach is the same as before, but now we consider entropy optimality on a path space. After finding an appropriate entropy-maximizer, the Brascamp-Lieb inequality will admit a gorgeous one-line proof. Our argument is taken from the beautiful paper of Lehec.

For simplicity, we first work through an entropy optimization on a discrete path space. Then we move on to Brownian motion.

1.1. Entropy optimality on discrete path spaces

Consider a finite state space {\Omega} and a transition kernel {p : \Omega \times \Omega \rightarrow [0,1]} , so that {\sum_{y \in \Omega} p(x,y) = 1} for every {x \in \Omega} . Also fix a time horizon {T \geq 1} .

Let {\mathcal P_T} denote the space of all paths {\gamma : \{0,1,\ldots,T\} \rightarrow \Omega} . There is a natural measure {\mu_{\mathcal P}} on {\mathcal P_T} coming from the transition kernel:

\displaystyle  \mu_{\mathcal P}(\gamma) = \prod_{t=0}^{T-1} p\left(\gamma(t), \gamma(t+1)\right)\,.
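To make the definitions concrete, here is a minimal Python sketch; the two-state chain and the horizon {T} below are hypothetical choices for illustration.

```python
import itertools
import numpy as np

# A toy two-state chain (rows of p sum to 1) and a short horizon.
Omega = [0, 1]
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])
T = 3

def mu_P(gamma):
    """The path measure: product of the transition probabilities."""
    return np.prod([p[gamma[t], gamma[t + 1]] for t in range(T)])

paths = list(itertools.product(Omega, repeat=T + 1))
# mu_P is a measure, not a probability: it sums to |Omega| since no
# initial distribution has been fixed.
print(sum(mu_P(g) for g in paths))                  # 2.0
# Restricted to paths with a fixed starting point, it sums to 1.
print(sum(mu_P(g) for g in paths if g[0] == 0))     # 1.0
```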

Now suppose we are given a starting point {x_0 \in \Omega} , and a target distribution specified by a function {f : \Omega \rightarrow {\mathbb R}_+} scaled so that {\mathop{\mathbb E}[f(X_T) \mid X_0 = x_0]=1} , where {\{X_t\}} denotes the Markov chain on {\Omega} with transition kernel {p} . If we let {\nu_T} denote the law of {X_T \mid X_0 = x_0} , then this simply says that {f} is a density with respect to {\nu_T} . One should think about {\nu_T} as the natural law at time {T} (given {X_0=x_0} ), and about {f \nu_T} as a perturbation of this law.

Let us finally define the set {\mathcal M_T(f; x_0)} of all measures {\mu} on {\mathcal P_T} that start at {x_0} and end at {f \nu_T} , i.e. those measures satisfying

\displaystyle  \mu\left(\{\gamma : \gamma(0)=x_0\}\right) = 1\,,

and for every {x \in \Omega} ,

\displaystyle  f(x) \nu_T(x) = \sum_{\gamma \in \mathcal P_T : \gamma(T)=x} \mu(\gamma)\,.

Now we can consider the entropy optimization problem:

\displaystyle  \min \left\{ D(\mu \,\|\, \mu_{\mathcal P}) : \mu \in \mathcal M_T(f;x_0) \right\}\,. \ \ \ \ \ (1)

One should verify that, like many times before, we are minimizing the relative entropy over a polytope.

One can think of the optimization as simply computing the most likely way for a mass of particles sitting at {x_0} to end up in the distribution {f \nu_T} at time {T} .

The optimal solution {\mu^*} exists and is unique. Moreover, we can describe it explicitly: {\mu^*} is given by a time-inhomogeneous Markov chain. For {0 \leq t \leq T-1} , this chain has transition kernel

\displaystyle  q_t(x,y) = p(x,y) \frac{H_{T-t-1} f(y)}{H_{T-t} f(x)}\,, \ \ \ \ \ (2)

where {H_t} is the heat semigroup of our chain {\{X_t\}} , i.e.

\displaystyle  H_t f(x) = \mathop{\mathbb E}[f(X_t) \mid X_0 = x]\,.
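For a finite chain, the semigroup {H_t} is just a matrix power, and the tilted kernel (2) is one line on top of it. Here is a sketch continuing the toy chain above (the target distribution is again an arbitrary illustrative choice); note that each {q_t} is automatically stochastic because {\sum_y p(x,y)\, H_{T-t-1}f(y) = H_{T-t}f(x)} :

```python
import numpy as np

p = np.array([[0.7, 0.3],
              [0.4, 0.6]])
T, x0 = 3, 0

nu_T = np.linalg.matrix_power(p, T)[x0]   # law of X_T given X_0 = x0
target = np.array([0.9, 0.1])             # desired endpoint law f * nu_T
f = target / nu_T                         # density with respect to nu_T

def H(t):
    """Heat semigroup applied to f: H_t f = p^t f."""
    return np.linalg.matrix_power(p, t) @ f

def q(t):
    """Tilted transition kernel (2) of the optimal chain."""
    return p * H(T - t - 1)[None, :] / H(T - t)[:, None]

for t in range(T):
    assert np.allclose(q(t).sum(axis=1), 1.0)  # each q_t is stochastic
```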

Let {\{W_t\}} denote the time-inhomogeneous chain with transition kernels {\{q_t\}} and {W_0=x_0} and let {\mu} denote the law of the random path {\{W_0, \ldots, W_T\}} . We will now verify that {\mu} is the optimal solution to (1).

We first need to confirm that {\mu \in \mathcal M_T(f;x_0)} , i.e. that {W_T} has law {f \nu_T} . To this end, we will verify inductively that {W_t} has law {(H_{T-t} f)\cdot \nu_t} . For {t=0} , this follows by definition: {\nu_0} is concentrated at {x_0} , and {H_T f(x_0) = \mathop{\mathbb E}[f(X_T) \mid X_0=x_0] = 1} by our normalization. For the inductive step:

\displaystyle  \begin{array}{lll}  \displaystyle\mathop{\mathbb P}[W_{t+1}=y] &= \sum_{x \in \Omega} \Pr[W_t=x] \cdot p(x,y) \frac{H_{T-t-1} f(y)}{H_{T-t} f(x)} \\ \displaystyle&= \sum_{x \in \Omega} H_{T-t} f(x) \nu_t(x) p(x,y) \frac{H_{T-t-1} f(y)}{H_{T-t} f(x)} \\ \displaystyle&= \sum_{x \in \Omega} \nu_t(x) p(x,y) H_{T-t-1}f(y) \\ \displaystyle & = H_{T-t-1} f(y) \nu_{t+1}(y)\,. \end{array}
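This inductive computation is easy to check numerically: push the law of {W_t} forward through the kernels {q_t} and compare with {(H_{T-t} f)\cdot \nu_t} at every step (same toy chain and target as in the sketches above):

```python
import numpy as np

p = np.array([[0.7, 0.3],
              [0.4, 0.6]])
T, x0 = 3, 0

nu = [np.eye(2)[x0]]                 # nu_t: the law of X_t given X_0 = x0
for _ in range(T):
    nu.append(nu[-1] @ p)
f = np.array([0.9, 0.1]) / nu[T]     # density of the target w.r.t. nu_T

H = lambda t: np.linalg.matrix_power(p, t) @ f
q = lambda t: p * H(T - t - 1)[None, :] / H(T - t)[:, None]

law = np.eye(2)[x0]                  # law of W_0
for t in range(T):
    assert np.allclose(law, H(T - t) * nu[t])   # claimed marginal at time t
    law = law @ q(t)                            # one step of the chain
assert np.allclose(law, f * nu[T])              # W_T has law f * nu_T
```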

We have confirmed that {\mu \in \mathcal M_T(f;x_0)} . Let us now verify its optimality by writing

\displaystyle  D(f \nu_T \,\|\,\nu_T) = \mathop{\mathbb E}_{\nu_T} [f \log f] = \mathop{\mathbb E}[\log f(W_T)]\,,

where the final equality uses the fact we just proved: {W_T} has law {f \nu_T} . Continuing, we have

\displaystyle  \mathop{\mathbb E}[\log f(W_T)] = \sum_{t=0}^{T-1} \mathop{\mathbb E}\left[\log \frac{H_{T-t-1} f(W_{t+1})}{H_{T-t} f(W_t)}\right] = \sum_{t=0}^{T-1} \mathop{\mathbb E} \left[D(q_t(W_t, \cdot) \,\|\, p(W_t,\cdot))\right]\,,

where the first equality is a telescoping sum (using {H_0 f = f} and {H_T f(x_0) = 1} ), and the final equality uses the definition of {q_t} in (2). The latter quantity is precisely {D(\mu \,\|\, \mu_{\mathcal P})} by the chain rule for relative entropy.

Exercise: One should check that if {\{A_t\}} and {\{B_t\}} are two time-inhomogeneous Markov chains on {\Omega} with the same initial law and respective transition kernels {a_t} and {b_t} , then indeed the chain rule for relative entropy yields

\displaystyle  D(\{A_0, \ldots, A_T\} \,\|\, \{B_0, \ldots, B_T\}) = \sum_{t=0}^{T-1} \mathop{\mathbb E}\left[D\left(a_t(A_t, \cdot)\,\|\,b_t(A_t,\cdot)\right)\right]\,. \ \ \ \ \ (3)
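For a finite state space, the identity (3) can be checked by brute force. Here is a sketch with two random time-inhomogeneous chains (the random kernels are placeholders for any concrete instance):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 3
start = np.array([1.0, 0.0])          # both chains start at state 0

def random_kernel():
    k = rng.random((n, n))
    return k / k.sum(axis=1, keepdims=True)

a = [random_kernel() for _ in range(T)]
b = [random_kernel() for _ in range(T)]

def path_prob(kernels, g):
    pr = start[g[0]]
    for t in range(T):
        pr *= kernels[t][g[t], g[t + 1]]
    return pr

# Left-hand side of (3): relative entropy between the path measures.
lhs = sum(path_prob(a, g) * np.log(path_prob(a, g) / path_prob(b, g))
          for g in itertools.product(range(n), repeat=T + 1)
          if path_prob(a, g) > 0)

# Right-hand side of (3): expected stepwise relative entropies.
law, rhs = start, 0.0
for t in range(T):
    step_kl = (a[t] * np.log(a[t] / b[t])).sum(axis=1)
    rhs += law @ step_kl
    law = law @ a[t]

print(lhs, rhs)                       # the two sides agree
```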

We conclude that

\displaystyle  D(f \nu_T \,\|\, \nu_T) = D(\mu \,\|\,\mu_{\mathcal P})\,,

and from this one immediately concludes that {\mu=\mu^*} . Indeed, for any measure {\mu' \in \mathcal M_T(f;x_0)} , we must have {D(\mu' \,\|\,\mu_{\mathcal P}) \geq D(f \nu_T \,\|\,\nu_T)} . This follows because {f \nu_T} is the law of the endpoint of a path drawn from {\mu'} , while {\nu_T} is the law of the endpoint of a path drawn from {\mu_{\mathcal P}} started at {x_0} . The relative entropy between the endpoint laws is at most the relative entropy between the laws of the entire paths. (This intuitive fact can again be proved via the chain rule for relative entropy by conditioning on the endpoint of the path.)
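As a final sanity check on the discrete story, the following sketch enumerates all paths of the toy chain, confirms the equality {D(\mu \,\|\, \mu_{\mathcal P}) = D(f\nu_T \,\|\, \nu_T)} , and verifies that moving mass between two paths with the same start and endpoint (which stays inside the polytope {\mathcal M_T(f;x_0)} ) only increases the relative entropy:

```python
import itertools
import numpy as np

p = np.array([[0.7, 0.3],
              [0.4, 0.6]])
T, x0 = 3, 0
nu_T = np.linalg.matrix_power(p, T)[x0]
f = np.array([0.9, 0.1]) / nu_T

H = lambda t: np.linalg.matrix_power(p, t) @ f
q = lambda t: p * H(T - t - 1)[None, :] / H(T - t)[:, None]

paths = [g for g in itertools.product(range(2), repeat=T + 1) if g[0] == x0]
mu_P = {g: np.prod([p[g[t], g[t + 1]] for t in range(T)]) for g in paths}
mu_opt = {g: np.prod([q(t)[g[t], g[t + 1]] for t in range(T)]) for g in paths}

D = lambda mu: sum(mu[g] * np.log(mu[g] / mu_P[g]) for g in paths if mu[g] > 0)

endpoint = (f * nu_T * np.log(f)).sum()   # D(f nu_T || nu_T)
print(endpoint, D(mu_opt))                # equal, as claimed

# Shift mass between two paths sharing start and endpoint: the measure
# stays feasible, but the relative entropy strictly increases.
mu = dict(mu_opt)
mu[(0, 0, 0, 0)] += 0.01
mu[(0, 1, 0, 0)] -= 0.01
print(D(mu) > D(mu_opt))                  # True
```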

1.2. The Brownian version

Let us now do the same thing for processes driven by Brownian motion in {\mathbb R^n} . Let {\{B_t : t \in [0,1]\}} be a Brownian motion with {B_0=0} . Let {\gamma_n} be the standard Gaussian measure and recall that {B_1} has law {\gamma_n} .

We recall that if we have two measures {\mu} and {\nu} on {\mathbb R^n} such that {\nu} is absolutely continuous with respect to {\mu} , we define the relative entropy

\displaystyle D(\nu\,\|\,\mu) = \int d\nu \log \frac{d\nu}{d\mu}\,.

Our “path space” will consist of drift processes {\{W_t : t \in [0,1]\}} of the form

\displaystyle  W_t = B_t + \int_0^t u_s\,ds\,, \ \ \ \ \ (4)

where {\{u_s\}} denotes the drift. We require that {\{u_s\}} is progressively measurable, i.e. that {u_s} is determined by the history of the driving Brownian motion up to time {s} , and that {\mathop{\mathbb E} \int_0^1 \|u_s\|^2 \,ds < \infty} . Note that we can write such a process in differential notation as

\displaystyle  dW_t = dB_t + u_t\,dt\,,

with {W_0=0} .

Fix a smooth density {f : \mathbb R^n \rightarrow {\mathbb R}_+} with {\int f \,d\gamma_n =1} . In analogy with the discrete setting, let us use {\mathcal M(f)} to denote the set of processes {\{W_t\}} that can be realized in the form (4) and such that {W_0 = 0} and {W_1} has law {f d\gamma_n} .

Let us also use the shorthand {W_{[0,1]} = \{W_t : t\in [0,1]\}} to represent the entire path of the process. Again, we will consider the entropy optimization problem:

\displaystyle  \min \left\{ \vphantom{\bigoplus} D\left(W_{[0,1]} \,\|\, B_{[0,1]}\right) : W_{[0,1]} \in \mathcal M(f) \right\}\,. \ \ \ \ \ (5)

As in the discrete setting, this problem has a unique optimal solution (in the sense of stochastic processes). Here is the main result.

Theorem 1 (Föllmer) If {\{ W_t = B_t + \int_0^t u_s\,ds : t \in [0,1]\}} is the optimal solution to (5), then

\displaystyle  D\left(W_{[0,1]}\,\|\,B_{[0,1]}\right) = D(W_1 \,\|\, B_1) = \frac12 \int_0^1 \mathop{\mathbb E}\,\|u_t\|^2\,dt\,.

Just as for the discrete case, one should think of this as asserting that the optimal process only uses as much entropy as is needed for the difference in laws at the endpoint. The RHS should be thought of as an integral over the expected relative entropy generated at time {t} (just as in the chain rule expression (3)).

The reason for the quadratic term is the usual relative entropy approximation for infinitesimal perturbations. For instance, consider the relative entropy between a binary random variable with expected value {\tfrac12 (1-\varepsilon)} and a binary random variable with expected value {\tfrac12} :

\displaystyle  \frac12(1-\varepsilon) \log (1-\varepsilon) + \frac12 (1+\varepsilon) \log (1+\varepsilon) \approx \frac12 \varepsilon^2\,.
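The quadratic approximation is easy to see numerically:

```python
import numpy as np

def kl_binary(eps):
    """D( Bernoulli((1-eps)/2) || Bernoulli(1/2) )."""
    p = (1 - eps) / 2
    return p * np.log(2 * p) + (1 - p) * np.log(2 * (1 - p))

for eps in (0.1, 0.01, 0.001):
    print(kl_binary(eps) / (eps ** 2 / 2))   # ratio tends to 1
```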

I am going to delay the proof of Theorem 1 to the next post because doing it in an elementary way will require some discussion of Ito calculus. For now, let us prove the following.

Lemma 2 For any process {W_{[0,1]} \in \mathcal M(f)} given by a drift {\{u_t : t\in[0,1]\}} , it holds that

\displaystyle  D(W_1 \,\|\, B_1) \leq D(W_{[0,1]} \,\|\, B_{[0,1]}) =\frac12 \int_0^1 \mathop{\mathbb E}\,\|u_t\|^2\,dt\,.

Proof: The proof will be somewhat informal. It can be done easily using Girsanov’s theorem, but we try to keep the presentation here elementary and in correspondence with the discrete version above.

Let us first use the chain rule for relative entropy to calculate

\displaystyle  D\left(W_{[0,1]} \,\|\,B_{[0,1]}\right) = \int_0^1 \mathop{\mathbb E}\left[D( dW_t \,\|\, dB_t)\right] = \int_0^1 \mathop{\mathbb E}\left[D(dB_t + u_t\,dt \,\|\,dB_t)\right]\,. \ \ \ \ \ (6)

Note that {dB_t} has the law of an {n} -dimensional Gaussian with covariance {dt \cdot I} .

If {Z} is an {n} -dimensional Gaussian with covariance {\sigma^2 \cdot I} and {u \in \mathbb R^n} , then

\displaystyle  \begin{array}{lll}  D(Z + u \,\|\, Z) &= \mathop{\mathbb E}\left[\log \frac{e^{-\|Z\|^2/2\sigma^2}}{e^{-\|Z+u\|^2/2\sigma^2}}\right] \\ &= \mathop{\mathbb E}\left[\frac{\|u\|^2}{2\sigma^2} + \frac{\langle u,Z\rangle}{\sigma^2}\right] \\ &= \frac{\|u\|^2}{2\sigma^2}\,. \end{array}
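Before applying this infinitesimally, here is a quick Monte Carlo sanity check of the identity (the dimension, variance, and shift below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 3, 0.5
u = np.array([0.3, -0.2, 0.1])

# Sample X = Z + u and average the log density ratio of N(u, sigma^2 I)
# against N(0, sigma^2 I) at X: a Monte Carlo estimate of D(Z+u || Z).
Z = sigma * rng.standard_normal((200_000, n))
X = Z + u
log_ratio = ((X ** 2).sum(axis=1) - ((X - u) ** 2).sum(axis=1)) / (2 * sigma ** 2)
print(log_ratio.mean(), u @ u / (2 * sigma ** 2))   # both ~ 0.28
```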

Therefore:

\displaystyle  D(dB_t + u_t\,dt \,\|\,dB_t) = \mathop{\mathbb E} \left[\frac{\|u_t\|^2 dt^2}{2 dt}\mid \mathcal F_t\right] =\frac12 \mathop{\mathbb E}\left[\|u_t\|^2\,dt \mid \mathcal F_t\right]\,,

where the latter expectation is understood to be conditioned on the past {\mathcal F_t} up to time {t} .

In particular, plugging this into (6), we have

\displaystyle  D\left(W_{[0,1]} \,\|\,B_{[0,1]}\right) = \frac12 \int_0^1 \mathop{\mathbb E}\,\|u_t\|^2\,dt\,. \ \ \ \ \ (7)

Finally, {D(W_1 \,\|\, B_1) \leq D(W_{[0,1]} \,\|\, B_{[0,1]})} holds because the relative entropy between the endpoint laws is at most the relative entropy between the laws of the entire paths, exactly as in the discrete setting (see also the first comment below for a formal argument via the chain rule). \Box
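To get a feel for Lemma 2, consider deterministic drifts in one dimension. Any drift {u_t = a(t)} with {\int_0^1 a(t)\,dt = m} produces the endpoint law {N(m,1)} , so {D(W_1\,\|\,B_1) = m^2/2} , but only the constant drift attains this with its path cost; a minimal sketch (the three example drifts all have {m=1} and are arbitrary choices):

```python
import numpy as np

# Midpoint grid on [0,1] for simple Riemann-sum integration.
ts = np.linspace(0, 1, 100_000, endpoint=False) + 0.5e-5

for a in (lambda t: np.ones_like(t),   # constant drift: attains m^2/2
          lambda t: 2 * t,             # same endpoint law, higher cost
          lambda t: 3 * t ** 2):       # more uneven, higher still
    m = a(ts).mean()                   # integral of a over [0,1]
    endpoint = m ** 2 / 2              # D(W_1 || B_1) for W_1 ~ N(m, 1)
    path_cost = 0.5 * (a(ts) ** 2).mean()
    print(f"endpoint D = {endpoint:.3f}, path cost = {path_cost:.3f}")
```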

1.3. Brascamp-Lieb

The proof is taken directly from Lehec. We will use the entropic formulation of Brascamp-Lieb due to Carlen and Cordero-Erausquin.

Let {E} be a Euclidean space with subspaces {E_1, E_2, \ldots, E_m} . Let {P_i} denote the orthogonal projection onto {E_i} . Now suppose that for positive numbers {c_1, c_2, \ldots, c_m > 0} , we have

\displaystyle  \sum_{i=1}^m c_i P_i = \mathrm{id}_E\,. \ \ \ \ \ (8)

By (8), we have for all {x \in E} :

\displaystyle \|x\|^2 = \left\langle x,\sum_{i=1}^m c_i P_i x\right\rangle = \sum_{i=1}^m c_i\|P_i x\|^2\,.

The latter equality uses the fact that each {P_i} is an orthogonal projection.
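A concrete instance of (8) that is easy to test: three unit vectors at {120^\circ} in {\mathbb R^2} , with rank-one projections and {c_i = 2/3} (a standard illustrative example):

```python
import numpy as np

# Three unit vectors at 120 degrees; with c = 2/3 the rank-one
# projections P_i = v_i v_i^T resolve the identity as in (8).
angles = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
vs = [np.array([np.cos(t), np.sin(t)]) for t in angles]
Ps = [np.outer(v, v) for v in vs]
c = 2 / 3

print(np.allclose(sum(c * P for P in Ps), np.eye(2)))      # True

x = np.array([1.3, -0.4])
print(x @ x, sum(c * (P @ x) @ (P @ x) for P in Ps))       # equal
```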

Let {Z} denote a standard Gaussian on {E} , and let {Z_i} denote a standard Gaussian on {E_i} for each {i=1,2,\ldots, m} .

Theorem 3 (Carlen & Cordero-Erausquin version of Brascamp-Lieb) For any random vector {X \in E} , it holds that

\displaystyle  D(X \,\|\, Z) \geq \sum_{i=1}^m c_i D(P_i X \,\|\, Z_i)\,.

Proof: Let {\{W_t : t \in [0,1]\}} with {dW_t = dB_t + v_t\,dt} denote the entropy-optimal drift process such that {W_1} has the law of {X} . Then by Theorem 1,

\displaystyle  D(X\,\|\,Z) = \frac12 \int_0^1 \mathop{\mathbb E}\,\|v_t\|^2\,dt = \frac12 \int_0^1 \sum_{i=1}^m c_i \mathop{\mathbb E}\,\|P_i v_t\|^2\,dt \geq \sum_{i=1}^m c_i D(P_i X \,\|\, Z_i)\,,

where the middle equality applies the decomposition {\|x\|^2 = \sum_{i=1}^m c_i \|P_i x\|^2} to {v_t} , and the latter inequality uses Lemma 2 together with the facts that {P_i B_t} is a standard Brownian motion on {E_i} and {P_i W_1} has law {P_i X} . \Box
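Since the relative entropy between Gaussians has a closed form, Theorem 3 is easy to test when {X} itself is Gaussian. A sketch using the {120^\circ} projections from the example above and an arbitrary covariance, with {D(N(0,S)\,\|\,N(0,I_n)) = \frac12(\mathrm{tr}\,S - n - \log\det S)} :

```python
import numpy as np

angles = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
vs = [np.array([np.cos(t), np.sin(t)]) for t in angles]
c = 2 / 3

S = np.array([[1.5, 0.4],
              [0.4, 0.7]])              # an arbitrary covariance for X
n = 2
lhs = 0.5 * (np.trace(S) - n - np.log(np.linalg.det(S)))

# P_i X is a one-dimensional Gaussian with variance v_i' S v_i.
rhs = sum(c * 0.5 * (v @ S @ v - 1 - np.log(v @ S @ v)) for v in vs)
print(lhs >= rhs, lhs, rhs)             # True, ~0.158 >= ~0.088
```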

5 thoughts on “Entropy optimality on path space”

  1. Girsanov’s theorem in this setting gives the change of measure under which {W_{[0,1]} \in \mathcal M(f)} becomes a Brownian motion. Let {P} be the law of {W_{[0,1]}} , and define {Q} by

    \displaystyle  \frac{dQ}{dP} = \exp\left(-\int_0^1 \langle u_t, dB_t\rangle - \frac12 \int_0^1 \|u_t\|^2\,dt\right)\,.

    Then by Girsanov’s theorem, under the measure {Q} , the process {W_{[0,1]}} is a Brownian motion.

    To apply Girsanov, one should verify Novikov’s condition that {\mathop{\mathbb E} \exp\left(\frac12 \int_0^1 \|u_t\|^2\,dt\right) < \infty} . This could actually be false for our process (since {\mathop{\mathbb E} \|u_t\|^2 \rightarrow \infty} as {t \rightarrow 1} is a possibility). But when the density {f} has finite relative entropy, one can handle this by truncation and then taking a limit.

    Now the change of measure allows us to calculate the relative entropy

    \displaystyle  D(W_{[0,1]} \,\|\, B_{[0,1]}) = \int dP \log \frac{dP}{dQ} = \mathop{\mathbb E}\left[\int_0^1 \langle u_t, dB_t\rangle + \frac12 \int_0^1 \|u_t\|^2\,dt\right] = \frac12 \int_0^1 \mathop{\mathbb E} \,\|u_t\|^2\,dt\,,

    where the last equality holds because the stochastic integral is a mean-zero martingale.

    The assertion that {D(W_1 \,\|\, B_1) \leq D(W_{[0,1]} \,\|\, B_{[0,1]})} follows from the chain rule for relative entropy:

    \displaystyle  D(W_{[0,1]} \,\|\, B_{[0,1]}) = D(W_1 \,\|\, B_1) + \int D(W_{[0,1]} \,\|\, B_{[0,1]} \mid W_1=B_1=x) f(x) d\gamma_n(x)\,,

    where the second term is non-negative.
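    As a discretized sanity check, the density {dQ/dP} above should average to one under {P} . The sketch below uses a toy Euler discretization with an arbitrary bounded, progressively measurable drift {u_t = \sin(B_t)} :

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_paths, n_steps = 100_000, 200
    dt = 1.0 / n_steps

    B = np.zeros(n_paths)
    log_dQdP = np.zeros(n_paths)
    for _ in range(n_steps):
        u = np.sin(B)                        # depends only on the past
        dB = np.sqrt(dt) * rng.standard_normal(n_paths)
        log_dQdP += -u * dB - 0.5 * u ** 2 * dt
        B += dB

    print(np.exp(log_dQdP).mean())           # ~ 1.0
    ```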

  2. This is interesting stuff!

    I’m confused about what makes the h(u_t) term appear. Can you elaborate a bit on that?

    Also, I think there are some small typos in the last two $$ equations; in both cases what should be P_iX is changed into something a little different.

  3. Hi Aram,
    You are right that there is no additional h(.) term. I even removed it before your comment after I wrote the formal proof in the preceding comment using Girsanov. I will fix the typos involving P_i X. Thanks!
