Let $x(t)$, $x(0)=0$, be a Wiener process with parameters $a$ and $\sigma$. Prove that its mean equals $a \cdot t$ and that its covariance equals $R(t,s)=\sigma \min(t,s)$.
I will give a thorough and practical approach to proving the mean, variance and covariance of a simple Wiener process. If we start at some value $W_0$, we can think of the process $W_t$ as being completely stochastic, completely deterministic, or a mix of the two. When it is stochastic, we see each increment in time from $W_{t-1}$ to $W_{t}$ as being caused by an error term $\varepsilon_t \sim N(0,\sigma_\varepsilon ^2)$ that is independent across time with mean zero and variance $\sigma_\varepsilon ^2$. In that sense, a stochastic process is a sum of independent random error terms.
a. Completely deterministic: $W_t =W_0 +\alpha t$
b. Completely stochastic: $W_t =W_0 +\sum\limits_{i=1}^t \varepsilon_i $
c. A mix of deterministic and stochastic: $W_t =W_0 +\alpha t +\sum\limits_{i=1}^t \varepsilon_i $
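These three forms are straightforward to simulate as cumulative sums. Below is a minimal sketch; the horizon $T=100$, drift $\alpha=0.5$, starting value $W_0=0$ and unit-variance errors are illustrative choices, not taken from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha, W0 = 100, 0.5, 0.0
t = np.arange(1, T + 1)
eps = rng.standard_normal(T)              # independent N(0, 1) error terms

deterministic = W0 + alpha * t                    # form a.
stochastic = W0 + np.cumsum(eps)                  # form b.
mixed = W0 + alpha * t + np.cumsum(eps)           # form c.
```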
Now we will analyze form b. (Completely stochastic) and assume that $W_0=0$ and $\varepsilon_t \sim N(0,1)$. The results that follow are generalizable to all cases. Hence we have:
$W_t =\sum\limits_{i=1}^t \varepsilon_i$, or equivalently, at some earlier point in time $s<t$,

$W_t =W_s +\sum\limits_{i=s+1}^t \varepsilon_i $, or, in terms of differences with $s<t$,

$W_t-W_s = \left(W_s +\sum\limits_{i=s+1}^t \varepsilon_i \right)-W_s=\sum\limits_{i=s+1}^t \varepsilon_i$
Now we analyze the mean, variance and covariance of both $W_t$ and $W_t -W_s$. For this we need to remember the independence of the error terms, which, together with their zero means, gives $\mathbb{E}[\varepsilon_s \varepsilon_t]=\mathbb{E}[\varepsilon_s]\mathbb{E}[\varepsilon_t]=0$ for $s\neq t$. We also make use of linearity of expectation, e.g. $\mathbb{E}[\varepsilon_{t-2} +\varepsilon_{t-1}+\varepsilon_{t}]=\mathbb{E}[\varepsilon_{t-2}]+\mathbb{E}[\varepsilon_{t-1}]+\mathbb{E}[\varepsilon_{t}]$, which holds even if the terms are not independent, and of the variance rule $Var(W_t)= \mathbb{E}[(W_t)^2]-(\mathbb{E}[W_t])^2$. Finally, remember that if $\varepsilon_t \sim N(0,1)$ then $\mathbb{E}[\varepsilon_t]=0$ and $\mathbb{E}[\varepsilon_t \varepsilon_t]=1$.
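These two building blocks are easy to check numerically. Below is a minimal sketch (assuming, as above, independent $\varepsilon \sim N(0,1)$ draws; the sample size is illustrative) showing that $\mathbb{E}[\varepsilon_s \varepsilon_t]\approx 0$ for $s \neq t$ and $\mathbb{E}[\varepsilon_t \varepsilon_t]\approx 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two independent N(0, 1) error terms, sampled one million times each
eps_s, eps_t = rng.standard_normal((2, 1_000_000))

print((eps_s * eps_t).mean())   # ~ 0: independence plus zero means
print((eps_t * eps_t).mean())   # ~ 1: unit variance
```

With these tools in hand, let's start: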
$\mathbb{E}[W_t]=\mathbb{E}\left[\sum\limits_{i=1}^t \varepsilon_i\right]=\mathbb{E}[\varepsilon_1+\varepsilon_2+\varepsilon_3+...+\varepsilon_t]=\mathbb{E}[\varepsilon_{1}]+\mathbb{E}[\varepsilon_{2}]+\mathbb{E}[\varepsilon_{3}]+...+\mathbb{E}[\varepsilon_{t}]=0$
$Var(W_t)=\mathbb{E}[(W_t-\mathbb{E}[W_t])^2]=\mathbb{E}[(W_t)^2]-(\mathbb{E}[W_t])^2$. Note that $(\mathbb{E}[W_t])^2=0$ from the previous proof. Hence we have that
$Var(W_t)=\mathbb{E}[(W_t)^2]=\mathbb{E}\left[\left(\sum\limits_{i=1}^t \varepsilon_i\right)^2\right]=\mathbb{E}[(\varepsilon_1+\varepsilon_2+\varepsilon_3+...+\varepsilon_t)^2]=\mathbb{E}[\varepsilon_1 \varepsilon_1+\varepsilon_2\varepsilon_2+\varepsilon_3\varepsilon_3+...+\varepsilon_t \varepsilon_t+2\varepsilon_1\varepsilon_2+2\varepsilon_1\varepsilon_3+2\varepsilon_2\varepsilon_3+...]$. Since all cross products $\varepsilon_i \varepsilon_j$ with $i\neq j$ equal zero in expectation, we have

$Var(W_t)=\mathbb{E}[\varepsilon_1 \varepsilon_1+\varepsilon_2 \varepsilon_2+\varepsilon_3 \varepsilon_3+...+\varepsilon_t \varepsilon_t]=\mathbb{E}[\varepsilon_1 \varepsilon_1]+\mathbb{E}[\varepsilon_2 \varepsilon_2]+\mathbb{E}[\varepsilon_3 \varepsilon_3]+...+\mathbb{E}[\varepsilon_t \varepsilon_t]=\sigma_\varepsilon ^2+\sigma_\varepsilon ^2+\sigma_\varepsilon ^2+...+\sigma_\varepsilon ^2=t \sigma_\varepsilon ^2=t$

$Cov(W_t, W_s)=\mathbb{E}[(W_t-\mathbb{E}[W_t])(W_s-\mathbb{E}[W_s])]=\mathbb{E}[W_t W_s]-\mathbb{E}[W_t]\mathbb{E}[W_s]$. Since $\mathbb{E}[W_t]\mathbb{E}[W_s]=0$ from the proof above, we have that
$Cov(W_t, W_s)=\mathbb{E}[W_t W_s]=\mathbb{E}\left[\left(\sum\limits_{i=1}^t \varepsilon_i \right) \left(\sum\limits_{i=1}^s \varepsilon_i \right) \right] = \mathbb{E}[(\varepsilon_1 +\varepsilon_2+...+\varepsilon_s+ \varepsilon_{s+1}+ \varepsilon_{s+2}+...+\varepsilon_{t})(\varepsilon_1 +\varepsilon_2+...+\varepsilon_s)]$
Since the errors $\varepsilon_i$ and $\varepsilon_j$ are independent for $i\neq j$, this simply reduces to
$Cov(W_t, W_s)=\mathbb{E}[\varepsilon_1 \varepsilon_1+\varepsilon_2 \varepsilon_2+...+\varepsilon_s \varepsilon_s]=\mathbb{E}[\varepsilon_1 \varepsilon_1]+\mathbb{E}[\varepsilon_2 \varepsilon_2]+...+\mathbb{E}[\varepsilon_s \varepsilon_s]=\sigma_\varepsilon ^2+\sigma_\varepsilon ^2+...+\sigma_\varepsilon ^2=\sigma_\varepsilon ^2s=s$
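All three results ($\mathbb{E}[W_t]=0$, $Var(W_t)=t$ and $Cov(W_t,W_s)=s$) can be verified with a quick Monte Carlo sketch; the path count and the choices $s=30$, $t=80$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, s, t = 100_000, 30, 80
eps = rng.standard_normal((n_paths, t))   # one row of error terms per path
W = np.cumsum(eps, axis=1)                # W[:, k-1] holds W_k for each path

print(W[:, t - 1].mean())                      # ~ E[W_t] = 0
print(W[:, t - 1].var())                       # ~ Var(W_t) = t = 80
print(np.cov(W[:, t - 1], W[:, s - 1])[0, 1])  # ~ Cov(W_t, W_s) = s = 30
```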
Now let's move to an analysis of $W_t - W_s$. Intuitively you should feel that $\mathbb{E}[W_t - W_{t-1}]=0$, because the difference is simply an error term with mean zero. If you had a deterministic trend in your process, the difference of the trend part would be $\alpha t -\alpha (t-1)=\alpha t -\alpha t +\alpha=\alpha$. Now, let's see:
- $\mathbb{E}[W_t - W_s]=\mathbb{E}\left[\sum\limits_{i=s+1}^t \varepsilon_i \right]=\mathbb{E}[\varepsilon_{s+1}+\varepsilon_{s+2}+\varepsilon_{s+3}+...+\varepsilon_{t}]=\mathbb{E}[\varepsilon_{s+1}]+\mathbb{E}[\varepsilon_{s+2}]+\mathbb{E}[\varepsilon_{s+3}]+...+\mathbb{E}[\varepsilon_{t}]=0$
- $Var(W_t - W_s)=\mathbb{E}[((W_t - W_s)-\mathbb{E}[W_t - W_s])^2] =\mathbb{E}[(W_t - W_s)^2]-(\mathbb{E}[W_t - W_s])^2$. Since $\mathbb{E}[W_t - W_s]= 0$, we have that

$Var(W_t - W_s)=\mathbb{E}[(W_t - W_s)^2]=\mathbb{E}\left[ \left(\sum\limits_{i=s+1}^t \varepsilon_i \right)^2 \right]=\mathbb{E}[(\varepsilon_{s+1} +\varepsilon_{s+2}+\varepsilon_{s+3} +...+ \varepsilon_{t})^2]$. Since all cross products $\varepsilon_i \varepsilon_j$ with $i \neq j$ equal zero in expectation, we have that

$Var(W_t - W_s)=\mathbb{E}[\varepsilon_{s+1}\varepsilon_{s+1}]+\mathbb{E}[\varepsilon_{s+2}\varepsilon_{s+2}]+\mathbb{E}[\varepsilon_{s+3}\varepsilon_{s+3}]+...+\mathbb{E}[\varepsilon_{t}\varepsilon_{t}]=\sigma_\varepsilon ^2+\sigma_\varepsilon ^2+\sigma_\varepsilon ^2+...+\sigma_\varepsilon ^2=(t-s)\sigma_\varepsilon ^2 =t-s$

- $Cov(W_t-W_s,W_u)$ for $u<s$: note that since $u<s<t$, the process up to $W_u$ is uncorrelated with the errors contained in $W_t-W_s$. Therefore, by the same logic as before, $Cov(W_t-W_s,W_u) = 0$.
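As a sanity check on these increment results, the following sketch estimates the mean and variance of $W_t - W_s$ and its covariance with $W_u$ by simulation; the indices $u=10$, $s=30$, $t=80$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, u, s, t = 100_000, 10, 30, 80
W = np.cumsum(rng.standard_normal((n_paths, t)), axis=1)

inc = W[:, t - 1] - W[:, s - 1]        # W_t - W_s, i.e. the errors s+1, ..., t
print(inc.mean())                      # ~ E[W_t - W_s] = 0
print(inc.var())                       # ~ Var(W_t - W_s) = t - s = 50
print(np.cov(inc, W[:, u - 1])[0, 1])  # ~ Cov(W_t - W_s, W_u) = 0
```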
When you say
Let $x(t), x(0)=0$ be a Wiener process with the parameters $a$ and $\sigma$
you presumably mean that $x(t)=at+\sigma W(t)$ where $W(t)$ is a standard Wiener process, i.e. $W(0)=0$, $E[W(t)] =0$ and $W(t)-W(s) \sim N(0, t-s)$ when $t \gt s$.
Then $E[x(t)]= E[at] + \sigma E[W(t)] = at+0=at$, since $at$ is deterministic.
Assume $s$ is the minimum of the two, so $0 \lt s \lt t$. Then $W(t)-W(s)$ and $W(s)-W(0)$ are independent with zero means, so $E[(W(t)-W(s))(W(s)-W(0))]=0$. Since $W(0)=0$, expanding gives $E[W(t)W(s)]-E[W(s)^2]=0$, and hence $E[W(t)W(s)]=E[W(s)^2]=Var(W(s))=s$.
So $Cov(x(t), x(s)) =E[(x(t)-E[x(t)])(x(s)-E[x(s)])] = E[(\sigma W(t))(\sigma W(s))] = \sigma^2 s$. This squaring of $\sigma$ is not quite what your question asked to prove, so I suspect there might be an error in the question.
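A small simulation makes the $\sigma^2$ (rather than $\sigma$) easy to see. This sketch samples $x(s)$ and $x(t)$ directly via the independent-increments property; the values $a=0.5$, $\sigma=2$, $s=0.3$ and $t=0.8$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, a, sigma, s, t = 100_000, 0.5, 2.0, 0.3, 0.8

Ws = np.sqrt(s) * rng.standard_normal(n_paths)           # W(s) ~ N(0, s)
Wt = Ws + np.sqrt(t - s) * rng.standard_normal(n_paths)  # add independent increment
xs, xt = a * s + sigma * Ws, a * t + sigma * Wt

print(xt.mean())             # ~ a*t = 0.4
print(np.cov(xt, xs)[0, 1])  # ~ sigma^2 * s = 1.2, not sigma * s = 0.6
```

The empirical covariance lands near $\sigma^2 s = 1.2$ rather than $\sigma s = 0.6$, consistent with the suspected typo in the question.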