6.1, due on November 6

I had a hard time understanding Example 6.1.5 because I didn't follow how the double sum in the combined estimate was broken down. It'd be good to go over that derivation in class so I can see how to do the same breakdown for other statistics.

What did make a lot of sense to me was maximum likelihood estimation, because I can see how the likelihood of a parameter value is just the joint density of the observed sample, viewed as a function of that parameter. Generally though, I think it'd be nice if the introduction to the section contained a little more in the way of examples, so I could get some real-life context for these estimations and likelihoods.
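To convince myself of that equivalence, I tried a quick numerical sketch of my own (a toy example with simulated data, not one from the book): for i.i.d. Exponential(λ) samples, the likelihood of λ is the joint density of the sample evaluated at λ, and maximizing the log-likelihood numerically matches the closed-form MLE λ̂ = 1/x̄.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy example (my own, not from the book): 1000 i.i.d. Exponential(lam) draws.
rng = np.random.default_rng(0)
lam_true = 2.0
x = rng.exponential(scale=1.0 / lam_true, size=1000)

# The likelihood of lam is the joint density of the sample at lam:
#   L(lam) = prod_i lam * exp(-lam * x_i)
# so the log-likelihood is n*log(lam) - lam*sum(x_i).
def neg_log_likelihood(lam):
    return -(len(x) * np.log(lam) - lam * x.sum())

# Maximizing log L numerically should agree with the closed-form MLE 1/x̄.
res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0), method="bounded")
print(res.x, 1.0 / x.mean())  # both should be close to lam_true = 2.0
```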

What's the difference between the maximum likelihood estimate and the maximum likelihood estimator, if their formulas look identical in every example?

I'm excited to see how these methods are applied to machine learning.
