Say I used nested cross-validation to do SVM classification on an fMRI dataset with hyperparameter tuning (choosing between a linear and an RBF kernel). The classification accuracy on my outer cross-validation folds is good and consistent, and the models selected in the inner cross-validations are similar across folds, so model selection was fairly stable.
Now I want to run a permutation test on the classification to see whether the overall classification accuracy is significantly greater than chance.
My question is this: is each newly permuted dataset supposed to go through the same nested cross-validation procedure as the true dataset? If yes, I would imagine that the hyperparameters selected in the inner loops are likely to differ on every permuted dataset (and to differ from the true dataset). The end result would be a null distribution built from models with different hyperparameters. This strikes me as odd and possibly incorrect.
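For concreteness, here is a minimal sketch of that first option using scikit-learn, where the entire nested procedure (inner tuning loop included) is rerun on each label permutation. The data, fold counts, and parameter grid are placeholders, not my actual analysis:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Placeholder data standing in for fMRI features and class labels
X, y = make_classification(n_samples=60, n_features=20, random_state=0)

# Hypothetical tuning grid: kernel choice and regularization strength
param_grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]}

def nested_cv_accuracy(X, y):
    """Full nested CV: inner loop tunes, outer loop estimates accuracy."""
    inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    clf = GridSearchCV(SVC(), param_grid, cv=inner)  # retuned per outer fold
    return cross_val_score(clf, X, y, cv=outer).mean()

true_acc = nested_cv_accuracy(X, y)

# Null distribution: the SAME nested procedure on each permuted label vector,
# so each permutation may settle on different hyperparameters
rng = np.random.default_rng(0)
null = [nested_cv_accuracy(X, rng.permutation(y)) for _ in range(20)]

# Standard permutation p-value (with the +1 correction)
p = (1 + sum(a >= true_acc for a in null)) / (1 + len(null))
```

In practice one would use far more than 20 permutations; the count is kept small here only to keep the sketch quick to run.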
Alternatively, is each newly shuffled dataset supposed to undergo a single-level cross-validation in which every fold uses the same hyperparameters selected during the true-dataset analysis? This seems more natural to me, but I can't stop thinking that my null distribution might be biased, since I'm using hyperparameters specifically tuned to the true dataset.
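The second option would look roughly like this (again with placeholder data, and with the fixed hyperparameters chosen arbitrarily here to stand in for whatever the true-data tuning selected):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Placeholder data standing in for fMRI features and class labels
X, y = make_classification(n_samples=60, n_features=20, random_state=0)

# Hyperparameters frozen at the values tuned on the true dataset
# (kernel="rbf", C=1.0 are hypothetical stand-ins)
fixed_clf = SVC(kernel="rbf", C=1.0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
true_acc = cross_val_score(fixed_clf, X, y, cv=cv).mean()

# Null distribution: single-level CV on each permutation, no retuning
rng = np.random.default_rng(0)
null = [cross_val_score(fixed_clf, X, rng.permutation(y), cv=cv).mean()
        for _ in range(100)]

p = (1 + sum(a >= true_acc for a in null)) / (1 + len(null))
```

Because the inner tuning loop is skipped, this version is much cheaper per permutation, which is exactly what makes it tempting despite the potential bias I'm worried about.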