---
date: 2024-10-23
title: "Homework - MLE and Bayesian inference in the AR(1)"
subtitle: Time Series Analysis
author: Oren Bochman
description: "In this lesson we define the AR(1) process, stationarity, the ACF and PACF, differencing, and smoothing"
categories:
- Bayesian Statistics
keywords:
- time series
- stationarity
- strong stationarity
- weak stationarity
- lag
- autocorrelation function (ACF)
- partial autocorrelation function (PACF)
- smoothing
- trend
- seasonality
- differencing operator
- back shift operator
- moving average
---
::::: {.content-visible unless-profile="HC"}
::: {.callout-caution}
Section omitted to comply with the Honor Code
:::
:::::
::::: {.content-hidden unless-profile="HC"}
1. Consider an autoregressive model given by $y_t = \phi y_{t-1} + \epsilon_t, \quad \epsilon_t \stackrel{i.i.d.}{\sim} N(0,1)$. The maximum likelihood estimator of $\phi$ based on $T$ observations and using the full likelihood can be found by maximizing the expression (a numerical check follows the options):
- [ ] $-\sum_{t=2}^{T} (y_t - \phi y_{t-1})^2$
- [ ] $\sum_{t=2}^{T} (y_t - \phi y_{t-1})^2$
- [x] $\log(1-\phi^2) - y_1^2(1-\phi^2) - \sum_{t=2}^T(y_t - \phi y_{t-1})^2$
- [ ] $- y_1^2(1-\phi^2) - \sum_{t=2}^T(y_t - \phi y_{t-1})^2$
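
   The marked expression is twice the full log-likelihood up to additive constants, so maximizing either yields the same $\hat{\phi}$. A minimal sketch of the numerical maximization (assuming `numpy` and `scipy`; the simulated series, seed, and parameter values are illustrative, not from the course):

   ```python
   import numpy as np
   from scipy.optimize import minimize_scalar

   rng = np.random.default_rng(42)

   # Simulate an AR(1) with v = 1, as in the question.
   phi_true, T = 0.7, 500
   y = np.empty(T)
   y[0] = rng.normal(scale=np.sqrt(1 / (1 - phi_true**2)))  # stationary initial draw
   for t in range(1, T):
       y[t] = phi_true * y[t - 1] + rng.normal()

   def neg_full_loglik(phi):
       # Full log-likelihood (up to constants), with v = 1:
       # 0.5*log(1 - phi^2) - 0.5*y_1^2*(1 - phi^2) - 0.5*sum (y_t - phi*y_{t-1})^2
       resid = y[1:] - phi * y[:-1]
       return -(0.5 * np.log(1 - phi**2)
                - 0.5 * y[0]**2 * (1 - phi**2)
                - 0.5 * np.sum(resid**2))

   res = minimize_scalar(neg_full_loglik, bounds=(-0.999, 0.999), method="bounded")
   print(f"full-likelihood MLE of phi: {res.x:.4f}")
   ```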
2. Consider an autoregressive process of the form $y_t = \phi y_{t-1} + \epsilon_t, \quad \epsilon_t \stackrel{i.i.d.}{\sim} N(0,v)$. The maximum likelihood estimator of $v$ based on $T$ observations and the conditional likelihood, with $\hat{\phi}_{MLE}$ the conditional MLE of $\phi$, is given by (a closed-form sketch follows the options):
- [ ] $\hat{v}_{MLE} = \frac{\sum_{t=2}^T (y_t - \hat{\phi}_{MLE} y_{t-1})^2}{(T-2)}$
- [ ] $\hat{v}_{MLE} = \frac{\sum_{t=2}^T (y_t - \hat{\phi}_{MLE} y_{t-1})^2}{T}$
- [ ] None of the above
- [x] $\hat{v}_{MLE} = \frac{\sum_{t=2}^T (y_t - \hat{\phi}_{MLE} y_{t-1})^2}{(T-1)}$
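
   Under the conditional likelihood both MLEs are available in closed form: $\hat{\phi}_{MLE}$ is the least squares coefficient of $y_t$ on $y_{t-1}$, and $\hat{v}_{MLE}$ divides the residual sum of squares by the $T-1$ conditional terms. A minimal sketch (assuming `numpy`; the simulated data and parameter values are illustrative):

   ```python
   import numpy as np

   rng = np.random.default_rng(7)
   phi_true, v_true, T = 0.7, 2.0, 1000

   y = np.empty(T)
   y[0] = rng.normal(scale=np.sqrt(v_true / (1 - phi_true**2)))
   for t in range(1, T):
       y[t] = phi_true * y[t - 1] + rng.normal(scale=np.sqrt(v_true))

   # Closed-form conditional MLEs (least squares of y_t on y_{t-1}):
   phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
   resid = y[1:] - phi_hat * y[:-1]
   v_hat = np.sum(resid**2) / (T - 1)  # (T - 1) residual terms, t = 2,...,T

   print(f"phi_hat = {phi_hat:.4f}, v_hat = {v_hat:.4f}")
   ```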
3. Mark all the true statements below (a Monte Carlo check of the bias claims follows the options):
- [ ] Consider an AR(1) with parameters $\phi$ and $\nu$. The maximum likelihood estimator for $\nu$ obtained using the full likelihood and $T$ observations is an unbiased estimator.
- [ ] Consider an AR(1) with parameters $\phi$ and $\nu$. The posterior mode for $\phi$ obtained assuming a reference prior, the full likelihood, and based on $T$ observations, is equal to the maximum likelihood estimator for $\phi$ under the full likelihood.
- [ ] Consider an AR(1) with parameters $\phi$ and $\nu$. The maximum likelihood estimator for $\nu$ obtained using the conditional likelihood and $T$ observations is an unbiased estimator.
- [x] Consider an AR(1) with parameters $\phi$ and $\nu$. The posterior mode for $\phi$ obtained assuming a reference prior, the conditional likelihood, and based on $T$ observations, is equal to the maximum likelihood estimator for $\phi$ under the conditional likelihood.
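
   A quick Monte Carlo check of the bias claims (a sketch assuming `numpy`; the true values, series length, and replication count are illustrative): with $\hat{v}_{MLE} = \sum_{t=2}^T (y_t - \hat{\phi} y_{t-1})^2 / (T-1)$, the average estimate falls below the true $v$, consistent with marking the unbiasedness statements false.

   ```python
   import numpy as np

   rng = np.random.default_rng(0)
   phi_true, v_true, T, reps = 0.7, 1.0, 30, 10_000

   v_hats = np.empty(reps)
   for r in range(reps):
       # Simulate a short AR(1) series from its stationary distribution.
       y = np.empty(T)
       y[0] = rng.normal(scale=np.sqrt(v_true / (1 - phi_true**2)))
       for t in range(1, T):
           y[t] = phi_true * y[t - 1] + rng.normal(scale=np.sqrt(v_true))
       # Conditional MLEs of phi and v on this replicate.
       phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
       v_hats[r] = np.sum((y[1:] - phi_hat * y[:-1]) ** 2) / (T - 1)

   # Average estimate falls below v_true, showing the downward bias.
   print(f"E[v_hat] ~ {v_hats.mean():.4f} vs v = {v_true}")
   ```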
4. Mark all the true statements below (a direct-sampling sketch follows the options):
- [x] Consider an AR(1) with parameters $\phi$ and $\nu$. Maximum likelihood estimation for $\phi$ under the conditional likelihood is available in closed form, i.e., we can write down the MLE explicitly as a function of $y_{1:T}$ without requiring a numerical optimization method to obtain this MLE.
- [ ] Consider an AR(1) with parameters $\phi$ and $\nu$. Maximum likelihood estimation for $\phi$ under the conditional likelihood is not available in closed form, i.e., we cannot write down the MLE explicitly as a function of $y_{1:T}$ without requiring a numerical optimization method to obtain this MLE.
- [x] Consider an AR(1) with parameters $\phi$ and $\nu$. Posterior inference under a reference prior and the conditional likelihood can be done using direct sampling by first sampling $\nu$ from an inverse-gamma distribution and then sampling $\phi$ conditional on $\nu$ from a normal distribution.
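
   The direct-sampling scheme in the last statement can be written out explicitly: under the reference prior $p(\phi, \nu) \propto 1/\nu$ and the conditional likelihood, $\nu \mid y_{1:T} \sim IG\left(\frac{T-2}{2}, \frac{Q(\hat{\phi})}{2}\right)$ with $Q(\hat{\phi})$ the residual sum of squares, and $\phi \mid \nu, y_{1:T} \sim N\left(\hat{\phi}, \nu / \sum_{t=2}^T y_{t-1}^2\right)$. A minimal sketch (assuming `numpy` and `scipy`; the simulated series is a stand-in for real data):

   ```python
   import numpy as np
   from scipy.stats import invgamma

   rng = np.random.default_rng(1)
   phi_true, v_true, T = 0.7, 1.0, 300

   # Simulated data (illustrative stand-in for a real series).
   y = np.empty(T)
   y[0] = rng.normal(scale=np.sqrt(v_true / (1 - phi_true**2)))
   for t in range(1, T):
       y[t] = phi_true * y[t - 1] + rng.normal(scale=np.sqrt(v_true))

   # Sufficient quantities under the conditional likelihood.
   S = np.sum(y[:-1] ** 2)                      # sum of y_{t-1}^2
   phi_hat = np.sum(y[1:] * y[:-1]) / S         # conditional MLE of phi
   Q = np.sum((y[1:] - phi_hat * y[:-1]) ** 2)  # residual sum of squares

   # Direct sampling under the reference prior p(phi, v) proportional to 1/v:
   #   v | y    ~ Inverse-Gamma((T - 2)/2, Q/2)
   #   phi | v  ~ N(phi_hat, v / S)
   n_draws = 5000
   v_draws = invgamma.rvs(a=(T - 2) / 2, scale=Q / 2, size=n_draws, random_state=rng)
   phi_draws = rng.normal(loc=phi_hat, scale=np.sqrt(v_draws / S))

   print(f"posterior means: phi ~ {phi_draws.mean():.3f}, v ~ {v_draws.mean():.3f}")
   ```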
:::::