
Reading Up On Bayesian Methods


For the next few months I’ve decided to focus on semi-supervised learning in a Bayesian setting. At Johns Hopkins last summer I was introduced to “fancy generative models,” i.e., various flavors of the Dirichlet Process, but I was slow on the uptake. Now I’m trying to catch up. Here are some helpful reading lists:

In addition to a thorough understanding of MCMC, which is relatively simple, it’s also important to have at least an awareness of variational methods, which are relatively hard. Jason Eisner recently wrote a high-level introduction to variational inference that is a soft(er) encounter with the subject than the canonical reference:

M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 1999.
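To make the variational side of that contrast concrete, here is a minimal sketch (mine, not from the post or the paper above) of coordinate-ascent mean-field updates for a standard conjugate example: a Gaussian with unknown mean and precision. The hyperparameters and iteration count are arbitrary choices for illustration.

```python
import numpy as np

def cavi_gaussian(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, iters=50):
    """Mean-field variational inference for a Gaussian with unknown
    mean mu and precision tau (the standard conjugate textbook example).

    Model: x_n ~ N(mu, 1/tau), mu | tau ~ N(mu0, 1/(lam0*tau)),
           tau ~ Gamma(a0, b0).
    Factorized posterior: q(mu, tau) = q(mu) q(tau), with
           q(mu) = N(mu_n, 1/lam_n) and q(tau) = Gamma(a_n, b_n).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    sum_x = x.sum()
    sum_x2 = np.sum(x ** 2)

    # Initialize the expected precision arbitrarily.
    e_tau = a0 / b0
    # These two quantities do not depend on E[tau], so compute them once.
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
    a_n = a0 + (n + 1) / 2.0

    for _ in range(iters):
        lam_n = (lam0 + n) * e_tau               # precision of q(mu)
        e_mu = mu_n                              # E[mu]
        e_mu2 = mu_n ** 2 + 1.0 / lam_n          # E[mu^2]
        # Expected complete-data "sum of squares" term for q(tau).
        b_n = b0 + 0.5 * (sum_x2 - 2 * sum_x * e_mu + n * e_mu2
                          + lam0 * (e_mu2 - 2 * mu0 * e_mu + mu0 ** 2))
        e_tau = a_n / b_n                        # refreshed E[tau]

    return mu_n, lam_n, a_n, b_n

# Example: approximate posterior over the mean/precision of noisy data.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=100)
print(cavi_gaussian(data))
```

Each step holds one factor fixed and refits the other in closed form, which is what makes this conjugate case easy; non-conjugate models are where the machinery in the Jordan et al. paper becomes necessary.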

Where will this lead? It is argued that the Bayesian framework offers a more appealing cognitive model. That may be. What interests me is the pairing of Bayesian updating with data collection from the web. Philip Resnik recently covered efforts to translate voicemails during the revolution in Egypt as one method of reconnecting that country with the world. This data is clearly useful, but what is unclear is how to use it to retrain standard (e.g., frequentist) probabilistic NLP models. Cache models, at least in principle, offer an alternative.
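To illustrate what I mean by a cache model, here is a minimal sketch (hypothetical class and parameter names, illustrative interpolation weight) of the classic idea from language modeling: interpolate a fixed background distribution with counts accumulated from recently observed text, so that new data shifts the distribution without any retraining.

```python
from collections import Counter, deque

class CacheUnigramModel:
    """Interpolate a fixed background unigram model with a cache of
    recently seen words: p(w) = (1 - lam) * p_bg(w) + lam * p_cache(w).

    The background probabilities never change; only the cache counts do,
    so the model adapts to new text without retraining.
    """

    def __init__(self, background_probs, cache_size=1000, lam=0.2):
        self.p_bg = background_probs          # dict: word -> probability
        self.lam = lam                         # interpolation weight (illustrative)
        self.cache = deque(maxlen=cache_size)
        self.counts = Counter()

    def observe(self, word):
        """Add a newly seen word, evicting the oldest if the cache is full."""
        if len(self.cache) == self.cache.maxlen:
            oldest = self.cache[0]             # deque will drop this on append
            self.counts[oldest] -= 1
            if self.counts[oldest] == 0:
                del self.counts[oldest]
        self.cache.append(word)
        self.counts[word] += 1

    def prob(self, word):
        p_bg = self.p_bg.get(word, 1e-8)       # crude floor for unseen words
        p_cache = self.counts[word] / len(self.cache) if self.cache else 0.0
        return (1 - self.lam) * p_bg + self.lam * p_cache

# Example: the background model never saw "tahrir", but the cache adapts.
model = CacheUnigramModel({"the": 0.05, "square": 0.001}, lam=0.3)
for w in ["tahrir", "square", "tahrir"]:
    model.observe(w)
print(model.prob("tahrir"), model.prob("the"))
```

One natural Bayesian reading of the same idea is to treat the cache counts as pseudo-counts updating a Dirichlet prior over the background distribution, which is part of what makes the pairing with streaming web data attractive.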

Written by Spence

February 8th, 2011 at 4:46 pm

Posted in Machine Learning, NLP
