Validation, cross-validation and bi-cross-validation

Wednesday, February 27, 2019 - 3:00pm to 4:00pm

Speaker:  Art Owen, Stanford

Program Description:

This talk is a survey of data holdout ideas from statistics, meant for a non-statistical audience. Many or even most statistical methods involve making educated guesses, especially about which model form to use. Often, knowing the correct model is a harder problem than making use of a model. Observing the future consequences of a model choice can often tell us whether the model was good, or at least better than the alternatives. Sample holdout methods, called cross-validation, spare us from waiting to find out. They require an assumption that the data can be split into held-in and held-out parts, where the held-out data behave just as future observations would. When the data are not independent and identically distributed, plain holdouts don't work properly. For instance, with movie ratings, held-out movies' ratings would be correlated with the held-in ones because they share the same raters. Likewise, held-out raters' data would be correlated with the held-in data because of the common movies. That kind of tangled data can be handled via bi-cross-validation, holding out some movies and some raters.
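The held-out-block idea can be sketched in a few lines of numpy. This is a minimal illustration in the style of bi-cross-validation for low-rank matrix models: partition the ratings matrix into four blocks by holding out some rows (raters) and some columns (movies), fit a rank-k SVD on the fully held-in block, and use the two mixed blocks to predict the held-out corner. The function name `bcv_error` and the particular block choices are illustrative assumptions, not the speaker's exact formulation.

```python
import numpy as np

def bcv_error(X, row_holdout, col_holdout, k):
    """Bi-cross-validation error for a rank-k model (illustrative sketch).

    Holds out the block X[row_holdout, col_holdout], fits a rank-k
    truncated SVD on the fully held-in block D, and predicts the
    held-out block A from the mixed blocks B and C via A_hat = B D_k^+ C.
    """
    rows_in = np.setdiff1d(np.arange(X.shape[0]), row_holdout)
    cols_in = np.setdiff1d(np.arange(X.shape[1]), col_holdout)
    A = X[np.ix_(row_holdout, col_holdout)]  # held-out corner block
    B = X[np.ix_(row_holdout, cols_in)]      # held-out rows, held-in cols
    C = X[np.ix_(rows_in, col_holdout)]      # held-in rows, held-out cols
    D = X[np.ix_(rows_in, cols_in)]          # fully held-in block
    # Rank-k truncated pseudoinverse of the held-in block.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    Dk_pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
    A_hat = B @ Dk_pinv @ C
    return np.sum((A - A_hat) ** 2)
```

In practice one averages this error over several held-out row and column blocks and scans k, choosing the rank with the smallest bi-cross-validation error; for a matrix with a strong rank-3 signal, the error at k = 3 should be well below the error at k = 1.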