[Reading] Machine Learning


  1. G. C. Cawley and N. L. C. Talbot, β€œOn Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation,” Journal of Machine Learning Research, vol. 11, no. Jul, pp. 2079–2107, 2010.

    model selection, performance evaluation, bias-variance trade-off, selection bias, over-fitting

    The paper provides an empirical analysis of the errors made in model selection and performance evaluation. These errors can be decomposed into bias and variance. For model selection, the authors demonstrate that a low variance is at least as important as a low bias: a high variance in the selection criterion can lead to over-fitting in model selection and hence poor generalization performance. This over-fitting can be mitigated by regularizing the model selection criterion or by early stopping. For performance evaluation, the authors recommend evaluating over a wide range of data sets with nested cross-validation, using multiple randomized partitionings of the available data and performing model selection separately in each trial, as sketched below. Although the study concentrates on cross-validation based model selection, the conclusions apply to any model selection practice.

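The nested cross-validation protocol described above can be illustrated with a short sketch. The snippet below is a minimal illustration using scikit-learn, assuming a generic SVM classifier, a synthetic data set, and a placeholder hyper-parameter grid; none of these reflect the paper's actual experimental setup.

```python
# Minimal sketch of nested cross-validation: model selection (inner loop)
# is repeated independently inside every fold of the outer evaluation loop,
# so the reported score is not biased by the selection process.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

# Synthetic data set as a stand-in for a real benchmark.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Hypothetical hyper-parameter grid for illustration only.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)  # model selection
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)  # performance evaluation

# Inner loop: GridSearchCV tunes hyper-parameters using only the training
# portion of each outer fold. Outer loop: cross_val_score estimates the
# generalization performance of the whole selection-plus-fitting procedure.
model = GridSearchCV(SVC(), param_grid, cv=inner_cv)
nested_scores = cross_val_score(model, X, y, cv=outer_cv)

print("Nested CV accuracy: %.3f +/- %.3f" % (nested_scores.mean(), nested_scores.std()))
```

Repeating this whole procedure over several randomized partitionings of the data, as the paper suggests, further reduces the variance of the performance estimate.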