Open Access
Statistical Significance of the Netflix Challenge
Andrey Feuerverger, Yu He, Shashi Khatri
Statist. Sci. 27(2): 202-231 (May 2012). DOI: 10.1214/11-STS368

Abstract

Inspired by the legacy of the Netflix contest, we provide an overview of what has been learned—from our own efforts, and those of others—concerning the problems of collaborative filtering and recommender systems. The data set consists of about 100 million movie ratings (from 1 to 5 stars) involving some 480 thousand users and some 18 thousand movies; the associated ratings matrix is about 99% sparse. The goal is to predict the ratings that users will give to movies; systems which can do this accurately have significant commercial applications, particularly on the World Wide Web. We discuss, in some detail, approaches to “baseline” modeling, singular value decomposition (SVD), as well as kNN (nearest neighbor) and neural network models; temporal effects, cross-validation issues, ensemble methods and other considerations are discussed as well. We compare existing models in a search for new models, and also discuss the mission-critical issues of penalization and parameter shrinkage which arise when the dimension of a parameter space reaches into the millions. Although much work on such problems has been carried out by the computer science and machine learning communities, our goal here is to address a statistical audience, and to provide a primarily statistical treatment of the lessons that have been learned from this remarkable set of data.
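To make the penalized latent-factor ("SVD") approach mentioned above concrete, the following is a minimal sketch, not taken from the paper: each rating is modeled as a global mean plus user and movie biases plus an inner product of low-dimensional latent factors, fit by stochastic gradient descent with an L2 (shrinkage) penalty. All variable names, hyperparameter values and toy data are illustrative assumptions, standing in for the sparse 480,000-by-18,000 ratings matrix.

# Sketch of a regularized latent-factor model (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

# Toy (user, movie, rating) triples standing in for the sparse ratings matrix.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0),
           (2, 1, 2.0), (2, 2, 5.0), (3, 0, 3.0), (3, 2, 4.0)]
n_users, n_movies, k = 4, 3, 2                 # k = number of latent factors

mu = np.mean([r for _, _, r in ratings])       # global mean ("baseline")
b_u = np.zeros(n_users)                        # user biases
b_m = np.zeros(n_movies)                       # movie biases
P = 0.1 * rng.standard_normal((n_users, k))    # user latent factors
Q = 0.1 * rng.standard_normal((n_movies, k))   # movie latent factors

lr, lam = 0.02, 0.1                            # learning rate, L2 penalty

for epoch in range(200):
    for u, m, r in ratings:
        pred = mu + b_u[u] + b_m[m] + P[u] @ Q[m]
        err = r - pred
        # Gradient steps with shrinkage toward zero (penalized least squares).
        b_u[u] += lr * (err - lam * b_u[u])
        b_m[m] += lr * (err - lam * b_m[m])
        P[u], Q[m] = (P[u] + lr * (err * Q[m] - lam * P[u]),
                      Q[m] + lr * (err * P[u] - lam * Q[m]))

# Predict an unobserved (user, movie) pair, clipped to the 1-5 star scale.
u, m = 1, 1
print(round(float(np.clip(mu + b_u[u] + b_m[m] + P[u] @ Q[m], 1, 5)), 2))

The penalty parameter lam plays the role of the shrinkage discussed in the abstract: with millions of free parameters, unpenalized least squares would overfit badly, and cross-validation is typically used to choose lam and the number of factors k.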

Citation

Andrey Feuerverger, Yu He, Shashi Khatri. "Statistical Significance of the Netflix Challenge." Statist. Sci. 27(2): 202-231, May 2012. https://doi.org/10.1214/11-STS368

Information

Published: May 2012
First available in Project Euclid: 19 June 2012

zbMATH: 1330.62090
MathSciNet: MR2963993
Digital Object Identifier: 10.1214/11-STS368

Keywords: collaborative filtering, cross-validation, effective number of degrees of freedom, empirical Bayes, ensemble methods, gradient descent, latent factors, nearest neighbors, Netflix contest, neural networks, penalization, prediction error, recommender systems, restricted Boltzmann machines, shrinkage, singular value decomposition

Rights: Copyright © 2012 Institute of Mathematical Statistics
