Step-by-step explanation of K-fold cross-validation with grid search to optimise...

Discussion in 'Education' started by adb, Oct 8, 2018.

    I'm well aware of the advantages of k-fold (and leave-one-out) cross-validation. I'm equally aware of the advantages of splitting off a third holdout 'validation' set from your training set, which you use to assess model performance for different choices of hyperparameters, so you can tune them and pick the best ones before the final evaluation on the real test set. I've implemented both of these independently on various datasets.
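    To make the second idea concrete, here is a minimal sketch of the single train/validation/test split described above (the fractions and seed are arbitrary choices, not anything prescribed):

    ```python
    import numpy as np

    def three_way_split(n, test_frac=0.2, val_frac=0.2, seed=0):
        # Shuffle all indices once, then carve off the test and validation
        # portions; whatever remains is the training set.
        idx = np.random.default_rng(seed).permutation(n)
        n_test = int(n * test_frac)
        n_val = int(n * val_frac)
        test = idx[:n_test]
        val = idx[n_test:n_test + n_val]
        train = idx[n_test + n_val:]
        return train, val, test

    train, val, test = three_way_split(100)  # 60 / 20 / 20 split
    ```

    The validation set is used to compare hyperparameter choices; the test set is touched only once, at the very end.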

    However, I'm not exactly sure how to integrate these two processes. I'm certainly aware it can be done (nested cross-validation, I think?), and I have seen people explain it, but never in enough detail that I actually understood the particulars of the process.

    There are pages with interesting graphics that allude to this process (like this) without being clear on the exact execution of the splits and loops. Here, the fourth one is clearly what I want to be doing, but the process is unclear:

    [image: diagrams of several cross-validation splitting schemes]

    There are previous questions on this site, but while those outline the importance of separating validation sets from test sets, none of them specify the exact procedure by which this should be done.

    Is it something like: for each of the k folds, treat that fold as the test set, treat a different fold as the validation set, and train on the rest? That would seem to require iterating over the whole dataset k*k times, so that each fold gets used as training, test and validation data at least once. Nested cross-validation seems to imply that you do a test/validation split inside each of your k folds, but surely that cannot leave enough data for effective parameter tuning, especially when k is high.

    Could someone help me by providing a detailed explanation of the loops and splits that allow k-fold cross-validation (such that every datapoint is eventually treated as a test case) while also performing parameter tuning (such that you do not pre-specify hyperparameters, and instead choose those that perform best on a separate holdout set)?
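    For what it's worth, here is the loop structure I currently imagine nested cross-validation having: an outer k-fold loop produces the test folds, and for each outer fold, an inner k-fold loop over only the outer-training data does the grid search. This is just a sketch under my own assumptions; the closed-form ridge regression and MSE metric are arbitrary stand-ins for whatever model and score you actually use:

    ```python
    import numpy as np

    def kfold_indices(n, k, rng):
        # Partition a shuffled range(n) into k roughly equal folds.
        return np.array_split(rng.permutation(n), k)

    def ridge_fit(X, y, alpha):
        # Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y.
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

    def mse(w, X, y):
        return float(np.mean((X @ w - y) ** 2))

    def nested_cv(X, y, alphas, outer_k=5, inner_k=3, seed=0):
        rng = np.random.default_rng(seed)
        outer_folds = kfold_indices(len(y), outer_k, rng)
        outer_scores, chosen = [], []
        for i, test_idx in enumerate(outer_folds):
            # Everything outside the outer test fold is available for tuning.
            train_idx = np.concatenate(
                [f for j, f in enumerate(outer_folds) if j != i])
            # Inner loop: grid search by inner k-fold CV on outer-training data.
            inner_folds = kfold_indices(len(train_idx), inner_k, rng)
            best_alpha, best_err = None, np.inf
            for alpha in alphas:
                errs = []
                for m, val_pos in enumerate(inner_folds):
                    fit_pos = np.concatenate(
                        [f for p, f in enumerate(inner_folds) if p != m])
                    fit_idx, val_idx = train_idx[fit_pos], train_idx[val_pos]
                    w = ridge_fit(X[fit_idx], y[fit_idx], alpha)
                    errs.append(mse(w, X[val_idx], y[val_idx]))
                if np.mean(errs) < best_err:
                    best_err, best_alpha = float(np.mean(errs)), alpha
            # Refit on ALL outer-training data with the winning alpha,
            # then score once on the untouched outer test fold.
            w = ridge_fit(X[train_idx], y[train_idx], best_alpha)
            outer_scores.append(mse(w, X[test_idx], y[test_idx]))
            chosen.append(best_alpha)
        return outer_scores, chosen
    ```

    If this is right, each datapoint appears in exactly one outer test fold, the test data never influences tuning, and the total work is roughly outer_k * inner_k * len(alphas) model fits rather than k*k passes over the raw data. But I'd like someone to confirm or correct this picture.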

