Brownian Motion 2

A sequel to the previous post on Brownian Motion. We introduce Wiener Measure, the Blumenthal 0-1 Law, and the Strong Markov Property.

Update (07 April 2024): added some extra details on the Monotone Class Theorem.

Wiener Measure

  • Let $C(\mathbb{R}_{+}, \mathbb{R})$ be the set of continuous functions from $\mathbb{R}_+$ to $\mathbb{R}$.

  • Define the $\sigma-$algebra $\mathcal{L}$ as the smallest $\sigma-$algebra such that for all $t\geq 0$, the projection map $w \mapsto w(t)$ is measurable.

Now let $\Omega$ be the usual sample space, we consider:

\[\begin{aligned} \Omega &\rightarrow C(\mathbb{R}_+, \mathbb{R}) \\ \omega &\mapsto (t \mapsto B_t(\omega)) \end{aligned}\]

This is a measurable map, since composing it with any projection map yields a random variable.

(Let the map above be $f$ and let $g_t: w \mapsto w(t)$ denote the projection, so $g_t \circ f=B_t$; then $f^{-1}(g_t^{-1}(A))$ is measurable for every Borel set $A$, and the sets $g_t^{-1}(A)$ generate the $\sigma-$algebra on $C(\mathbb{R}_+, \mathbb{R})$).

The Wiener measure is then given by the probability measure:

\[W(A) = \mathbb{P}(B_. \in A)\]

for all $A \in \mathcal{L}$, where $B_.$ denotes the random continuous path $t\mapsto B_t(\omega)$.

Explicit Form of Wiener Measure

To obtain an explicit formula for this measure, we can consider the cylinder sets of the form:

\[A = \{w \in C(\mathbb{R}_+, \mathbb{R}): w(t_0) \in A_0, \dots, w(t_n) \in A_n\}\]

where $0=t_0 < \ldots < t_n$ and $A_0, \ldots, A_n$ are Borel sets. By the finite-dimensional formula from the previous post (with the convention $x_0 = 0$), we have:

\[\begin{aligned} W(A) &= \mathbb{P}(B_{t_0}\in A_0, \dots, B_{t_n} \in A_n) \\ &= \mathbf{1}_{A_0}(0) \int_{A_1 \times \cdots \times A_n} \frac{\exp\left(-\sum_{i=1}^n\frac{(x_{i}-x_{i-1})^2}{2(t_i-t_{i-1})}\right) dx_1\ldots dx_n}{(2\pi)^{n/2} \sqrt{t_1(t_2-t_1)\ldots (t_n-t_{n-1})}} \end{aligned}\]

To see this actually uniquely defines a measure, we apply the Monotone Class Theorem, the details of which are left to the end of the post.

The consequence of the preceding formula is that the Wiener measure (the law of the Brownian motion) is uniquely determined:

For any two Brownian motions $B^1, B^2$, we have $\mathbb{P}(B^1 \in A) = \mathbb{P}(B^2 \in A)$ for all $A \in \mathcal{L}$

When the probability space is $\Omega = C(\mathbb{R}_+, \mathbb{R})$ equipped with the Wiener measure, the canonical process \(X_t(w) = w(t)\) is a Brownian motion

Continuity follows from the definition of $C(\mathbb{R}_+, \mathbb{R})$ and finite-dimensional distribution follows from the formula above.

This is called the canonical construction of Brownian motion.
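As a numerical sanity check of the finite-dimensional formula, we can estimate the Wiener measure of a simple cylinder set by Monte Carlo. The sketch below (NumPy assumed) uses $A = \{w : w(1) > 0, w(2) > 0\}$; for this set the integral reduces to a bivariate normal orthant probability with correlation $\sqrt{t_1/t_2} = 1/\sqrt{2}$, namely $1/4 + \arcsin(1/\sqrt 2)/(2\pi) = 3/8$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Sample (B_1, B_2) from independent Gaussian increments:
# B_1 ~ N(0, 1) and B_2 - B_1 ~ N(0, 1).
b1 = rng.standard_normal(n)
b2 = b1 + rng.standard_normal(n)

# Cylinder set A = {w : w(1) > 0, w(2) > 0}.
est = np.mean((b1 > 0) & (b2 > 0))

# Orthant probability: 1/4 + arcsin(1/sqrt(2)) / (2*pi) = 3/8.
print(est)  # approximately 0.375
```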

Blumenthal 0-1 Law

The natural filtration of BM is $\mathcal{F}_t = \sigma(B_s: s \leq t)$, and we set

\[\mathcal{F}_{0+} = \bigcap_{t>0} \mathcal{F}_t\]

Note the difference between this and the tail $\sigma-$algebra $\bigcap_{n\geq 1} \sigma(X_n, X_{n+1}, \ldots)$ of a discrete-time process.

Blumenthal 0-1 Law The $\sigma-$algebra $\mathcal{F}_{0+}$ is trivial, i.e. for all $A \in \mathcal{F}_{0+}$, $\mathbb{P}(A) \in \{0,1\}$.

A proof is given here; note the two applications of the Monotone Class Theorem. We also include some details in the section below.

Now we give the applications.

Sample Path Properties

Informally speaking, the Brownian paths are “unstable”.

Sample Path Properties 1 Almost surely, for every $\varepsilon > 0$, $$ \sup_{0\leq s \leq \varepsilon} B_s > 0 \qquad \text{and} \qquad \inf_{0 \leq s \leq \varepsilon} B_s < 0 $$

The supremum and infimum may be taken over $[0,\varepsilon] \cap \mathbb{Q}$; since the Brownian motion is continuous, this gives the desired measurability.

To see this, define a set with $\varepsilon_p \downarrow 0$, $p\in \mathbb{N}$,

\[A = \bigcap_p \left\{\sup_{0\leq s \leq \varepsilon_p} B_s > 0\right\}\]

then $A \in \mathcal{F}_{0+}$.

Note the events are decreasing in $p$ and the event indexed by $p$ lies in $\mathcal{F}_{\varepsilon_p}$; since $\varepsilon_p \downarrow 0$, the intersection lies in $\mathcal{F}_t$ for every $t > 0$, hence in $\mathcal{F}_{0+}$.

Now simply note

\[\mathbb{P}(A)=\lim_{p\to\infty} \mathbb{P}\left(\sup_{0\leq s \leq \varepsilon_p} B_s > 0\right) \geq 1/2\]

since each term is at least $\mathbb{P}(B_{\varepsilon_p}>0)=1/2$ by symmetry. As $A \in \mathcal{F}_{0+}$, the Blumenthal 0-1 law gives $\mathbb{P}(A) \in \{0,1\}$, hence $\mathbb{P}(A)=1$. The other assertion is proved by replacing $B_s$ with $-B_s$.
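A discretized illustration of this instability (a sketch; a finite grid can only approximate the supremum and infimum over $[0,\varepsilon]$, so the estimated probability falls slightly short of $1$):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, steps, eps = 2_000, 1_000, 1e-4   # a tiny window [0, eps]
dt = eps / steps

# Brownian paths on [0, eps] via cumulative sums of N(0, dt) increments.
incs = rng.standard_normal((n_paths, steps)) * np.sqrt(dt)
paths = np.cumsum(incs, axis=1)

# Fraction of paths taking both positive and negative values on [0, eps].
both = np.mean((paths.max(axis=1) > 0) & (paths.min(axis=1) < 0))
print(both)  # close to 1; the gap is discretization error
```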

A consequence of this result is the non-monotonicity of the Brownian motion.

Corollary Almost surely, the function $t \mapsto B_t$ is not monotone on any interval.

For every rational $q\in\mathbb{Q}_+$, apply Property 1 to the shifted process $s \mapsto B_{q+s}-B_q$, which is a Brownian motion by the simple Markov property; this shows $B$ is almost surely not monotone on any interval with rational endpoints, hence on any interval.

Another sample path property is that the Brownian motion oscillates unboundedly almost surely.

Sample Path Properties 2 Almost surely, $$ \limsup_{t \to \infty} B_t = \infty \qquad \text{and} \qquad \liminf_{t \to \infty} B_t = -\infty $$ Consequently, for every $a \in \mathbb{R}$, the hitting time $T_a = \inf\{t \geq 0: B_t = a\}$ is finite almost surely.

We sketch the proof below.

  • A continuous function $f$ on $\mathbb{R}_+$ takes every value in $\mathbb{R}$ whenever $\limsup f = \infty$ and $\liminf f = -\infty$, by the intermediate value theorem.

  • We now show that $\mathbb{P}(\sup_{t\geq 0} B_t > M)=1$ for all $M>0$.

  • Note from the previous property, we have

\[\mathbb{P}(\sup_{0\leq t \leq 1} B_t > 0) = \lim_{\delta \to 0} \mathbb{P}(\sup_{0\leq t \leq 1} B_t > \delta) =1\]
  • Using the scaling invariance $B_t^{\delta} = \delta^{-1} B_{\delta^2 t}$ (again a Brownian motion), the event $\{\sup_{0\leq t \leq 1} B_t > \delta\}$ coincides with $\{\sup_{0\leq t \leq 1/\delta^2} B_t^{\delta} > 1\}$, hence
\[\begin{aligned} 1 = \lim_{\delta \to 0} \mathbb{P}\left(\sup_{0\leq t \leq 1} B_t > \delta\right) &= \lim_{\delta \to 0} \mathbb{P}\left(\sup_{0\leq t \leq 1/\delta^2} B_t > 1\right) \\ &= \lim_{\delta \to 0} \mathbb{P}\left(\sup_{0\leq t \leq M^2/\delta^2} B_t > M\right) = \mathbb{P}\left(\sup_{t \geq 0} B_t > M\right) \end{aligned}\]
where the last line uses scaling by a factor of $M$ and the monotone convergence of the events.

This shows $\mathbb{P}(\sup_{t\geq 0} B_t > M)=1$ for all $M>0$; for the infimum, we can use $-B_t$ instead.
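The scaling step can be checked numerically. The sketch below (NumPy assumed; grid sizes are an arbitrary choice) estimates both sides of $\mathbb{P}(\sup_{0\leq t\leq 1} B_t > \delta) = \mathbb{P}(\sup_{0\leq t\leq 1/\delta^2} B_t > 1)$ for $\delta = 1/2$; the discrete grid slightly underestimates the true supremum on both sides.

```python
import numpy as np

rng = np.random.default_rng(2)

def sup_exceeds(horizon, level, n_paths=10_000, dt=1e-3, chunk=2_000):
    """Monte Carlo estimate of P(sup_{0<=t<=horizon} B_t > level),
    computed in chunks to bound memory usage."""
    steps = int(round(horizon / dt))
    hits = 0
    for start in range(0, n_paths, chunk):
        m = min(chunk, n_paths - start)
        incs = rng.standard_normal((m, steps)) * np.sqrt(dt)
        hits += int(np.sum(np.cumsum(incs, axis=1).max(axis=1) > level))
    return hits / n_paths

delta = 0.5
lhs = sup_exceeds(1.0, delta)               # P(sup_{[0,1]} B_t > delta)
rhs = sup_exceeds(1.0 / delta**2, 1.0)      # P(sup_{[0,1/d^2]} B_t > 1)
print(lhs, rhs)  # both near 2 * (1 - Phi(1/2)), about 0.617
```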

Strong Markov Property

An extremely useful property of the Brownian motion is the strong Markov property, as it allows for some explicit computations.

A stopping time $T$ is a random variable with values in $[0, \infty]$ such that $\{T \leq t\} \in \mathcal{F}_t$ for all $t\geq 0$.

When we write the random variable $X_T$, we really mean the composition of the maps $\omega \mapsto (T(\omega), \omega)$ and $(t, \omega) \mapsto X_t(\omega)$.

The value of $T=\infty$ is allowed and note that for a stopping time $\{T <t\}=\bigcup_{q\in [0,t) \cap \mathbb{Q}} \{T\leq q\} \in \mathcal{F}_t$.

For a stopping time $T$, the $\sigma-$algebra of the past before $T$ is defined as:

\[\mathcal{F}_T = \{A \in \mathcal{F}: A \cap \{T \leq t\} \in \mathcal{F}_t \text{ for all } t\geq 0\}\]

We can verify that $\mathcal{F}_T$ is indeed a $\sigma-$algebra: countable unions and the empty set are easy to check; for complements, note that for $A\in \mathcal{F}_T$, $A^c \cap \{T\leq t\} = \{T\leq t\} \setminus (A \cap \{T\leq t\}) \in \mathcal{F}_t$.

It is easy to see $T$ is measurable wrt. $\mathcal{F}_T$.

Now we claim the random variable $\mathbf{1}_{\{T<\infty\}} B_T$ is $\mathcal{F}_T-$measurable.

Indeed, we can write it as a pointwise limit of simple functions:

\[\mathbf{1}_{\{T<\infty\}} B_T = \lim_{n\to \infty} \sum_{i=0}^{\infty} \mathbf{1}_{\{i2^{-n} \leq T < (i+1)2^{-n}\}} B_{i2^{-n}}\]

We can check this by plugging in $\omega$ and use continuity of Brownian motion.

In addition, the random variable $\mathbf{1}_{\{s<T\}}B_{s}$ is $\mathcal{F}_T-$measurable.

Now we state the strong Markov property.

Strong Markov Property Let $T$ be a stopping time with $\mathbb{P}(T < \infty) = 1$ and let $B$ be a Brownian motion; then the process $$ B_t^{(T)} = \mathbf{1}_{\{T<\infty\}} (B_{T+t}-B_T) $$ is a Brownian motion independent of $\mathcal{F}_T$.

Again, we only sketch the proof here.

  • It suffices to show that for all $A \in \mathcal{F}_T$, all $0 \leq t_1 < \ldots < t_p$, and all bounded measurable $F$, we have
\[\mathbb{E}[\mathbf{1}_A F(B_{t_1}^{(T)}, \ldots, B_{t_p}^{(T)})] = \mathbb{P}(A)\, \mathbb{E}[F(B_{t_1}^{(T)}, \ldots, B_{t_p}^{(T)})]\]

To show this identity, we need to “break apart” the random variable $B_t^{(T)}$ using an indicator function as above, then we can use the simple Markov property to get the desired result, namely:

  • Denote by $[T]_n$ the dyadic approximation of $T$ from above: $[T]_n = (k+1)2^{-n}$ on the event $\{k2^{-n} \leq T < (k+1)2^{-n}\}$, so that $[T]_n \downarrow T$. Then \(\begin{aligned} \mathbb{E}[\mathbf{1}_A F(B_{t_1}^{(T)}, \ldots, B_{t_p}^{(T)})] &= \lim_{n\to \infty} \mathbb{E}[\mathbf{1}_A F(B_{t_1}^{([T]_n)}, \ldots, B_{t_p}^{([T]_n)})] \\ &= \lim_{n\to \infty} \sum_{k=0}^\infty \mathbb{E}[\mathbf{1}_A \mathbf{1}_{\{k2^{-n} \leq T < (k+1)2^{-n}\}} F(B_{(k+1)2^{-n}+t_1}-B_{(k+1)2^{-n}}, \ldots, B_{(k+1)2^{-n}+t_p}-B_{(k+1)2^{-n}})] \end{aligned}\) by the Dominated Convergence Theorem.

The first equality uses the continuity of the sample paths: since $[T]_n \downarrow T$, we have $B_{[T]_n+t}-B_{[T]_n} \to B_{T+t}-B_T$ pointwise.

Now note that $\mathbf{1}_A \mathbf{1}_{\{k2^{-n} \leq T < (k+1)2^{-n}\}}$ is $\mathcal{F}_{(k+1)2^{-n}}-$measurable (because $A \in \mathcal{F}_T$), which allows us to use the simple Markov property at time $(k+1)2^{-n}$ to get the desired result.

The Reflection Principle

Reflection Principle Let $B$ be a Brownian motion and define $S_t = \sup_{s\leq t} B_s$; then for $a > 0$ and $b\leq a$, we have $$ \mathbb{P}(S_t \geq a, B_t \leq b) = \mathbb{P}(B_t \geq 2a-b) $$ In addition, $S_t$ has the same distribution as $|B_t|$.

The trick is to note that:

\[\mathbb{P}(S_t \geq a, B_t \leq b) = \mathbb{P}(T_a \leq t, B_t \leq b) = \mathbb{P}(T_a \leq t, B_{t-T_a}^{(T_a)} \leq b-a)\]

since $B_{t-T_a}^{(T_a)} = B_t - B_{T_a}=B_t - a$ on the event $\{T_a \leq t\}$.

Now we can use the strong Markov property to get independence between $T_a$ and $B_{t-T_a}^{(T_a)}$, so replacing $B_{t-T_a}^{(T_a)}$ with a random variable with the same distribution is allowed:

\[\begin{aligned} \mathbb{P}(T_a \leq t, B_{t-T_a}^{(T_a)} \leq b-a) &= \mathbb{P}(T_a \leq t, -B_{t-T_a}^{(T_a)} \leq b-a) \\ &= \mathbb{P}(T_a \leq t, B_t \geq 2a-b) \\ &= \mathbb{P}(B_t \geq 2a-b) \\ \end{aligned}\]

since $\{B_t \geq 2a-b\} \subseteq \{T_a \leq t\}$ (note $2a-b \geq a$, so $B_t \geq 2a-b$ forces the level $a$ to have been hit by time $t$).
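A Monte Carlo check of the reflection identity (a sketch assuming NumPy; the discrete-time maximum slightly undershoots the true running supremum, so the estimate carries a small downward bias):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
n_paths, steps, t = 40_000, 1_000, 1.0
a, b = 1.0, 0.0
dt = t / steps

count = 0
for start in range(0, n_paths, 5_000):       # chunked to bound memory
    incs = rng.standard_normal((5_000, steps)) * sqrt(dt)
    paths = np.cumsum(incs, axis=1)
    s_t = paths.max(axis=1)                  # approximates S_t
    b_t = paths[:, -1]                       # B_t
    count += int(np.sum((s_t >= a) & (b_t <= b)))

est = count / n_paths
# Right-hand side P(B_t >= 2a - b) via the standard normal CDF.
exact = 1 - 0.5 * (1 + erf((2 * a - b) / sqrt(2 * t)))
print(est, exact)
```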

Corollary The distribution of $T_a$ then has density $$ f(t) = \frac{a}{\sqrt{2\pi t^3}} \exp\left(-\frac{a^2}{2t}\right) $$ supported on $t > 0$.
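We can verify the corollary numerically by integrating the density and comparing with the closed form $\mathbb{P}(T_a \leq t) = \mathbb{P}(S_t \geq a) = 2(1-\Phi(a/\sqrt{t}))$ coming from the reflection principle (a deterministic check; only NumPy and the standard library are assumed):

```python
import numpy as np
from math import erf, sqrt, pi

a, t_max = 1.0, 4.0

# Trapezoidal integration of f(t) = a / sqrt(2 pi t^3) * exp(-a^2 / (2t)).
t = np.linspace(1e-8, t_max, 400_001)
f = a / np.sqrt(2 * pi * t**3) * np.exp(-a**2 / (2 * t))
cdf_numeric = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# Closed form P(T_a <= t_max) = 2 * (1 - Phi(a / sqrt(t_max))).
cdf_exact = 2 * (1 - 0.5 * (1 + erf(a / sqrt(2 * t_max))))
print(cdf_numeric, cdf_exact)  # both near 0.6171
```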

Extension to $\mathbb{R}^d$

We first extend to arbitrary starting points.

If $Z$ is a real random variable, a process $(X_t)_{t\geq 0}$ is a real Brownian motion starting from $Z$ if $X_t = Z + B_t$, where $B$ is a standard Brownian motion independent of $Z$.

A $d-$ dimensional Brownian motion is a process $B_t = (B_t^1, \ldots, B_t^d)$ where $B_t^i$ are independent real Brownian motions starting from $0$. Similarly, we can define a $d-$dimensional Brownian motion starting from $Z\in \mathbb{R}^d$ by $B_t = Z + (B_t^1, \ldots, B_t^d)$, where $Z$ is independent of $B_t^i$.

Note that the components of a Brownian motion starting from $Z$ need not be independent, since the coordinates of $Z$ may be dependent.
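To illustrate this dependence concretely, here is a small simulation (a sketch; starting both coordinates from the same $Z$ is a deliberately dependent choice). With $Z \sim N(0,1)$ shared by both coordinates and $t=1$, each component is $X^i_t = Z + B^i_t$, so the correlation is $\mathrm{Var}(Z)/(\mathrm{Var}(Z)+t) = 1/2$ even though the driving Brownian motions are independent.

```python
import numpy as np

rng = np.random.default_rng(4)
n, t = 100_000, 1.0

# Shared starting point Z ~ N(0, 1) for both coordinates (dependent by design).
z = rng.standard_normal(n)
# Independent driving Brownian motions: B^i_t ~ N(0, t).
x1 = z + np.sqrt(t) * rng.standard_normal(n)
x2 = z + np.sqrt(t) * rng.standard_normal(n)

corr = float(np.corrcoef(x1, x2)[0, 1])
print(corr)  # Cov = Var(Z) = 1 and Var = Var(Z) + t = 2, so corr ~ 0.5
```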

Monotone Class Theorem

A sub-collection of sets $\mathcal{M} \subseteq \mathcal{P}(\Omega)$ is a monotone class if

  • $\Omega \in \mathcal{M}$
  • $A, B \in \mathcal{M}$ and $A \subseteq B$ implies $B \setminus A \in \mathcal{M}$
  • $A_1 \subseteq A_2 \subseteq \cdots$ with $A_i \in \mathcal{M}$ implies $\bigcup_i A_i \in \mathcal{M}$

Examples:

  • The sets $\{A \in \mathcal{F}: \mu(A) = \nu(A) \}$ form a monotone class, where $\mu, \nu$ are measures on $\mathcal{F}$.

  • The sets $\{A \in \mathcal{F}: A \ \mathrm{independent \ of} \ \mathcal{G} \}$ form a monotone class, where $\mathcal{G} \subseteq \mathcal{F}$ is a sub-$\sigma-$algebra.

We will denote the smallest monotone class containing a collection $\mathcal{C}$ by $\mathcal{M}(\mathcal{C})$, in other words,

\[\mathcal{M}(\mathcal{C}) = \bigcap_{\mathcal{C} \subseteq \mathcal{M} \text{ monotone class}} \mathcal{M}\]
Monotone Class Theorem If $\mathcal{C}$ is stable under finite intersections, then $\mathcal{M}(\mathcal{C})=\sigma(\mathcal{C})$.

We omit the proof here but give the applications mentioned in the sections above.

A more detailed discussion is in George Lowther’s blog.

A word on proving things with the Monotone Class Theorem:

  • We usually define a collection $\mathcal{L}$ that satisfies the properties we would like to show and contains the generator $\mathcal{A}$

  • We then show $\mathcal{L}$ is a monotone class so that $\sigma(\mathcal{A})=\mathcal{M}(\mathcal{A}) \subseteq \mathcal{L}$

Definition of Wiener Measure

Recall that we gave an explicit formula for the measure of the cylinder sets $A$, and we would like to show that it coincides with the Wiener measure.

Denote this explicit measure as $\mu$ and the desired measure as $\nu$. Also, let $\mathcal{A}$ be the collection of cylinder sets, which is stable under finite intersection. Using the first example,

\[\mathcal{L} = \{A \in \sigma(\mathcal{A}): \mu(A) = \nu(A)\} \supseteq \mathcal{A}\]

is a monotone class and thus $\sigma(\mathcal{A})=\mathcal{M}(\mathcal{A}) \subseteq \mathcal{L}$.

Independence in Blumenthal 0-1 Law

Recall that we showed using continuity

\[\mathbb{E}[\mathbf{1}_A F(B_{t_1}, \ldots, B_{t_n})] = \lim_{\varepsilon\to 0} \mathbb{E}[\mathbf{1}_A F(B_{t_1}-B_{\varepsilon}, \ldots, B_{t_n}-B_{\varepsilon})]\]

then by the simple Markov property, this finite collection of random variables is independent of any $A \in \mathcal{F}_{0+}$. Now we can use the second example to get the desired result.

Namely, fix $A \in \mathcal{F}_{0+}$ and let $\mathcal{A}$ be the collection of events of the form $\{B_{t_1} \in A_1, \ldots, B_{t_n} \in A_n\}$, which is stable under finite intersection. Then

\[\mathcal{L} = \{C \in \mathcal{F}: C \ \mathrm{independent \ of} \ A\} \supseteq \mathcal{A}\]

is a monotone class and thus $\sigma(\mathcal{A})=\mathcal{M}(\mathcal{A}) \subseteq \mathcal{L}$. Since $A \in \mathcal{F}_{0+} \subseteq \sigma(\mathcal{A})$, the event $A$ is independent of itself, so $\mathbb{P}(A) = \mathbb{P}(A)^2 \in \{0,1\}$.

Finally, we present another version of the Monotone Class Theorem that deals with functions.

Functional Monotone Class Theorem

The following statement is taken from planetmath.org.

Let $E$ be a set and $\mathcal{K} \subseteq B(E)$ be a collection of functions from $E$ to $\mathbb{R}$ that is closed under multiplication:

\[f, g \in \mathcal{K} \implies fg \in \mathcal{K}\]

where $B(E)$ is the set of bounded functions from $E$ to $\mathbb{R}$.

Let $\mathcal{A}$ be the $\sigma$-algebra on $E$ generated by $\mathcal{K}$.

Suppose $\mathcal{H}$ is a vector space of functions $f: E \to \mathbb{R}$, such that:

  • $\mathcal{K} \subseteq \mathcal{H}$

  • Any constant function is in $\mathcal{H}$

  • For non-negative $f_n$, $f_n \in \mathcal{H}$ and $f_n \uparrow f$ pointwise implies $f \in \mathcal{H}$

Then $\mathcal{H}$ contains all bounded $\mathcal{A}$-measurable functions to $\mathbb{R}$.

Here, we say $\mathcal{K}$ generates $\mathcal{A}$ if $\mathcal{A}$ is the smallest $\sigma-$algebra containing the sets $f^{-1}(B)$ for all $f\in \mathcal{K}$ and $B\in \mathcal{B}(\mathbb{R})$.

Equivalent definition of Markov Process

We can use the Functional Monotone Class Theorem to show that the Markov property of finite dimensional distributions is equivalent to the definition of a Markov process.

The following definition is a modified version of the one in Revuz and Yor’s Continuous Martingales and Brownian Motion.

Instead of just positive functions, we consider bounded functions as in Le Gall’s Brownian Motion, Martingales, and Stochastic Calculus.

Markov Process Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_t, Q)$ be a filtered probability space; an adapted stochastic process with transition function $P_{s,t}$ is a Markov process if $$ \mathbb{E}[f(X_{t})|\mathcal{F}_s] = P_{s,t} f(X_s) \qquad Q-\text{a.s.} $$ for all pairs $0\leq s \leq t$ and $f\in B(E)$, where $E$ is the state space.

This definition is equivalent to the following (omitting the a.s.):

Markov Property For $f_{0}, \ldots, f_{k} \in B(E)$ and all $0=t_0 \leq \ldots \leq t_k$, we have $$ \begin{aligned} \mathbb{E}\left[\prod_{i=0}^k f_{i}(X_{t_i})\right] &= \int \nu(d x_{0})f_{0}(x_{0})\int P_{t_0,t_{1}}(x_{0},d x_{1})f_{1}(x_{1})\\ &\ldots \int P_{t_{k-1},t_{k}}(x_{k-1},d x_{k})f_{k}(x_{k}) \end{aligned} $$ where $\nu$ denotes the law of $X_0$.

To see this, we first recall a characterization of conditional expectation:

For $\sigma$-algebra $\mathcal{G} \subseteq \mathcal{F}$, a random variable $Z\in \mathcal{G}$ is a conditional expectation $\mathbb{E}[X \mid \mathcal{G}]$ if and only if:

\[\mathbb{E}[ZY] = \mathbb{E}[XY] \qquad \forall Y \in L^2(\Omega, \mathcal{G}, \mathbb{P})\]

For a proof, see this set of notes and this post on Math Stack Exchange.

So it suffices to show that for all $Y \in L^2(\Omega, \mathcal{F}_s, Q)$, we have

\[\mathbb{E}[f(X_t)Y] = \mathbb{E}[P_{s,t}f(X_s)Y]\]

for all $f\in B(E)$.

We will use the Functional Monotone Class Theorem to construct a collection of random variables that satisfy the desired property. First, we construct $\mathcal{K}$ to be the set of product of indicator functions of the form:

\[\mathcal{K} = \left\{\prod_{i=1}^k \mathbf{1}(X_{t_i} \in A_i): A_i \text{ measurable subsets of } E, \, k\in \mathbb{N}, \, 0=t_0 \leq \ldots \leq t_k =s\right\}\]

It is easy to see that $\mathcal{K}$ is stable under multiplication. Note the indicator functions are defined on $\Omega$, and they generate the $\sigma-$algebra $\mathcal{F}_s$ (for the natural filtration).

Then we let $\mathcal{H}$ be the set of bounded $\mathcal{F}_s-$measurable random variables $Y$ satisfying the desired identity:

\[\mathcal{H} = \left\{Y \in B(\Omega, \mathcal{F}_s): \mathbb{E}[f(X_t)\,Y]=\mathbb{E}[P_{s,t}f(X_s)\,Y] \text{ for all } f \in B(E)\right\}\]

This is a vector space containing the constant functions, and $\mathcal{K} \subseteq \mathcal{H}$ follows from the finite-dimensional Markov property above. For non-negative $Y_n \in \mathcal{H}$ with $Y_n \uparrow Y$ pointwise and $Y$ bounded, the Monotone Convergence Theorem shows $Y \in \mathcal{H}$, so the third property of $\mathcal{H}$ is satisfied.

Thus, by the Functional Monotone Class Theorem, we have that $\mathcal{H}$ contains all bounded $\mathcal{F}_s-$measurable functions.
