It sounds like you need multivariate testing. For example, select four elements on the page you'd like to test. You could radically redesign each of those elements so that in one version of the test, you're testing a mostly redesigned page. Each of the four elements is then turned "on" or "off" independently, which gives you 2⁴ = 16 combinations, one of which (everything "off") is your control, the original design.
So say your elements are the numbers 1, 2, 3 and 4. You'd test the variations like the pattern below, with the dashes (-) meaning that the element was not "turned on" in that variation:
1234
1-34
1--4
-234
12--
123-
--34
1-3-
(etc.)
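The full set of combinations is easy to enumerate programmatically. Here is a minimal sketch (the element labels and dash notation follow the pattern above; the code itself is just an illustration, not part of any testing tool):

```python
from itertools import product

ELEMENTS = "1234"  # the four page elements under test

# Each combination turns every element on (shown by its digit) or off ("-").
combinations = [
    "".join(elem if on else "-" for elem, on in zip(ELEMENTS, bits))
    for bits in product([True, False], repeat=len(ELEMENTS))
]

print(len(combinations))  # 2^4 = 16, including the all-off control "----"
for combo in combinations:
    print(combo)
```

Note that "----" (every element off) is simply the original page, which is why the control falls out of the same enumeration.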
Smashing Magazine has a good article on multivariate testing.
As to whether one method is better than the other: with multivariate testing (versus A/B), you will know exactly which element improved the click-through rate or the usability of the page, since each element is tested both with and without every combination of the others.
With A/B testing, especially when testing a broad, sweeping redesign, you will not know whether you could have done better: if everything changes at once, you can't attribute the improvement to any single element.
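Whichever method you use, each visitor should see the same variation on every visit, or your per-element attribution falls apart. A common way to do this is deterministic bucketing on a stable user identifier; here is a rough sketch (the function name and the choice of SHA-256 are my own assumptions, not a reference to any particular testing library):

```python
import hashlib

def assign_variant(user_id: str, num_variants: int = 16) -> int:
    """Deterministically bucket a user into one of the test variants.

    Hashing the user ID means the same visitor always lands in the same
    bucket, without storing any per-user state server-side.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_variants

# The same ID always maps to the same variant index (0..15).
print(assign_variant("user-42"))
```

With 16 variants you also need enough traffic per bucket to reach statistical significance, which is one practical reason teams often fall back to simpler A/B tests.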
I know that as a designer, I love to do big redesigns, but often what helps the end user is the careful testing of single elements. And I firmly believe that, especially in application flows or servicing interactions, the best UI changes are the ones that are invisible to the end user. By gradually rolling out the elements that test better, over time you'll have a completely redesigned page, but your users will barely notice because the new elements were introduced slowly.
This way, you'll be able to avoid the cognitive dissonance that a one-and-done redesign can cause.