- In this course we will build the following skills:
- Probability Distribution
- R Programming
- Time Series Analysis and Forecasting
- Statistical Modeling
- Predictive Modeling
- Advanced Analytics
- Bayesian Statistics
- Sampling (Statistics)
- Statistical Methods
- Data Analysis
- Technical Communication
- Markov Model
- There are four modules in this course:
- Bayesian conjugate analysis for autoregressive time series models: We will focus on AR(P) models that fit in the Normal conjugate family, meaning the prior and likelihood lead to a posterior that is also in the Normal family (see the first sketch after this list).
- Model selection criteria: We will review the theory and code for AIC, BIC, and DIC, used to select the order of the AR(P) model as well as the number of components in the mixture model we will develop in the next module (see the second sketch after this list).
- Bayesian location mixture of AR(P) models: This module covers a new kind of model, extending the AR(P) model from a single location to a location mixture. This leads to conditionally conjugate components, keeping everything more or less Normal but with the many benefits of a mixture model.
- The peer-reviewed data analysis project: We will develop the model and then evaluate other students' models.
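To make the first module concrete, here is a minimal R sketch of the conjugate analysis (my own illustration, not the course's code). Conditioning on the first p observations turns the AR(p) likelihood into a Normal linear regression, and a Normal-Inverse-Gamma prior then gives a closed-form Normal-Inverse-Gamma posterior we can sample from directly. The prior settings and the simulated data are assumptions made for illustration.

```r
## Conjugate Bayesian inference for an AR(p) model, conditioning on the
## first p observations. Prior: phi | sig2 ~ N(m0, sig2 * C0),
## sig2 ~ Inverse-Gamma(a0, b0). All hyperparameters are illustrative.
set.seed(42)
p  <- 2
yt <- as.numeric(arima.sim(list(ar = c(0.6, -0.3)), n = 300))  # synthetic data

# Design matrix of lagged values: row t is (y_{t-1}, ..., y_{t-p})
n <- length(yt) - p
X <- sapply(1:p, function(k) yt[(p - k + 1):(length(yt) - k)])
y <- yt[(p + 1):length(yt)]

# Weakly informative prior hyperparameters (an assumption, not the course's)
m0 <- rep(0, p); C0 <- diag(10, p); a0 <- 2; b0 <- 1

# Closed-form Normal-Inverse-Gamma update
C0i <- solve(C0)
Cn  <- solve(C0i + crossprod(X))
mn  <- Cn %*% (C0i %*% m0 + crossprod(X, y))
an  <- a0 + n / 2
bn  <- b0 + 0.5 * (sum(y^2) + t(m0) %*% C0i %*% m0 - t(mn) %*% solve(Cn) %*% mn)

# Direct Monte Carlo draws from the joint posterior
S    <- 5000
sig2 <- 1 / rgamma(S, shape = an, rate = as.numeric(bn))
phi  <- t(sapply(sig2, function(s2)
  MASS::mvrnorm(1, mu = as.numeric(mn), Sigma = s2 * Cn)))

colMeans(phi)  # posterior means; should be near the true (0.6, -0.3)
mean(sig2)     # posterior mean of the innovation variance
```

No MCMC is needed here; that is the whole point of conjugacy.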
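And a companion sketch for the model selection module: fit AR(p) by maximum likelihood for a range of candidate orders and compare AIC and BIC (DIC would instead be computed from posterior draws, so I omit it here). Again, this is my own illustration; `yt` is the simulated series from the previous sketch.

```r
## Choosing the AR order with information criteria: smaller is better.
cand <- 1:6
fits <- lapply(cand, function(k)
  arima(yt, order = c(k, 0, 0), include.mean = FALSE))
ic <- data.frame(
  p   = cand,
  AIC = sapply(fits, AIC),
  BIC = sapply(fits, BIC)
)
ic
cand[which.min(ic$AIC)]  # order preferred by AIC
cand[which.min(ic$BIC)]  # order preferred by BIC
```

Since the data were simulated from an AR(2), both criteria should typically land on p = 2, with BIC penalizing larger orders more heavily.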
In this final course of the Coursera Bayesian Statistics Specialization we get to put what we learned in the previous courses into a small data science project.
The instructor, Jizhou Kang, was a doctoral student at UCSC. The course covers little new material but reviews most of the required material from the previous courses.
I found the course a little hard to follow at times. His explanations are often brief and his English is a little broken. However, it is clear that he took the time to prepare very clear presentations of the material for this course.
In my notes I tried to add cross-references to where we originally covered the material in earlier courses. In retrospect, though, I realized that while Kang does repeat some of the earlier material, such as AR(P), AIC, BIC, and DIC, he quickly casts it into the model we are working on, which makes the presentation both clear and not a mere repetition of the theory.
I also explored his personal website and found that it contains some very interesting material.
He has written a book called Cracking The First Year Exam. This book is a collection of questions from five first-year statistics courses. I noticed that some of the instructors from the specialization feature in this book.
He has made extensive notes for a course on Stochastic Processes. This is one of the first-year MA courses I haven't had the opportunity to take, and I will go over these notes as soon as I complete this course. I wanted to explore the connections between stochastic processes and Bayesian statistics, as I believe they can complement each other well. I also found that he has taught a short course on Bayesian Causal Inference, based primarily on research papers.
In Classical Inference he covers material from a frequentist-style statistical inference course following Casella and Berger (2002). This material is covered to some extent in the first course of the specialization, but not in any great detail, so it may be useful to review it as well. In fact, some of it fills gaps I felt were missing from the specialization and had added as appendices to my notes, like a proof of the central limit theorem and a number of results related to the law of large numbers. I wish I could say this is easy reading, but after taking this specialization it should at least look fairly familiar. Another topic I find of interest is chapter 9, on the method of moments.
In Bayesian Causal Inference - slides and Bayesian Causal Inference - report he reviews material from papers on Bayesian causal inference.
Overall I think he is one of the best explainers of this complex material. One downside is that his English is a little broken, but I've had much worse. You can see from the first slide that the course covers the key concepts in a clear and concise manner, leaving out very little. It also seems to me that Jizhou Kang still remembers how hard this material is at first glance, which is a great asset for teaching it effectively. You may notice that we revisit some old material; this is because the course makes a great effort to be self-contained.
Many parts of this course are steps towards completing the capstone project. And, perhaps just as exciting, we will also cover a new type of model: a mixture version of the autoregressive model from the previous course (sketched below).
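To get a feel for what such a mixture looks like, here is a small simulation: each time point draws a latent component label that shifts the level of the series, while the AR coefficient and variance are shared across components. This is my reading of the "location mixture" idea; the course's exact parameterization may differ.

```r
## Simulate a two-component location mixture of an AR(1) process.
set.seed(1)
n   <- 500
w   <- c(0.7, 0.3)  # mixture weights
mu  <- c(-2, 2)     # component locations (the mixed quantity)
phi <- 0.5          # shared AR coefficient
sig <- 1            # shared innovation standard deviation

z <- sample(1:2, n, replace = TRUE, prob = w)  # latent component labels
y <- numeric(n)
y[1] <- mu[z[1]] + rnorm(1, sd = sig)
for (t in 2:n) {
  y[t] <- mu[z[t]] + phi * y[t - 1] + rnorm(1, sd = sig)
}

plot(y, type = "l", main = "Location mixture of AR(1): simulated path")
```

Because only the location is mixed, each component remains conditionally Normal given the labels, which is what makes the Gibbs-style conditionally conjugate analysis possible.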
We will cover the Bayesian conjugate analysis for autoregressive time series models. This sounds a bit bombastic, but all it means is that we will use a likelihood and a prior that lead to a conjugate posterior, keeping everything within one distributional family!
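For the record, here is one standard way to write that setup (my notation, which may differ from the course's): condition on the first p observations, stack the lagged values into a design matrix $X$ so the AR(p) likelihood is a Normal linear regression, and place a Normal-Inverse-Gamma prior on the coefficients and innovation variance:

$$
\mathbf{y} \mid \boldsymbol{\phi}, \sigma^2 \sim \mathcal{N}(X\boldsymbol{\phi},\, \sigma^2 I), \qquad
\boldsymbol{\phi} \mid \sigma^2 \sim \mathcal{N}(\mathbf{m}_0,\, \sigma^2 C_0), \qquad
\sigma^2 \sim \mathrm{IG}(a_0, b_0).
$$

The posterior then stays in the same family,

$$
\boldsymbol{\phi} \mid \sigma^2, \mathbf{y} \sim \mathcal{N}(\mathbf{m}_n,\, \sigma^2 C_n), \qquad
\sigma^2 \mid \mathbf{y} \sim \mathrm{IG}(a_n, b_n),
$$

with the standard updates

$$
C_n = \left(C_0^{-1} + X^\top X\right)^{-1}, \quad
\mathbf{m}_n = C_n\left(C_0^{-1}\mathbf{m}_0 + X^\top \mathbf{y}\right), \quad
a_n = a_0 + \tfrac{n}{2}, \quad
b_n = b_0 + \tfrac{1}{2}\left(\mathbf{y}^\top\mathbf{y} + \mathbf{m}_0^\top C_0^{-1}\mathbf{m}_0 - \mathbf{m}_n^\top C_n^{-1}\mathbf{m}_n\right).
$$

These are exactly the quantities computed in the R sketch after the module list above.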