> The thing I don’t understand though is why we wouldn’t just take the IR data and ‘unshift’ the light since we know the distance of the objects we are observing? Is the process described above in some convoluted way equivalent to this idea of unshifting the colors?
I think this is a great question, but it may need some unpacking for readers who aren't familiar with the idea, so let's spell out what "unshifting" would involve.
Suppose we have a series of images of the same object taken through several fairly narrow-band (say 10% bandwidth) filters, each image named after the central wavelength of its filter.
Then suppose the redshift $z$ is determined from spectroscopic data.
We can then "re-name" each image with its rest-frame central wavelength, i.e. the wavelength that would be redshifted by $z$ to the observed one: $\lambda_{\text{rest}} = \lambda_{\text{obs}} / (1 + z)$.
Those won't necessarily land on red/green/blue wavelengths (say 650, 550, 450 nm), nor will they be good approximations to the spectral sensitivity of our eyes, and depending on the data, they may no longer represent visible wavelengths at all.
In rare cases there may be combinations of images and redshifts where two or potentially three visible colors can be recovered, and some semblance of a roughly realistic rendering could be reconstructed.
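The de-shifting step above can be sketched in a few lines. The filter wavelengths and redshift below are hypothetical, chosen just to illustrate a case where two de-shifted filters happen to land in the visible range:

```python
# Hypothetical example: "de-shift" the central wavelengths of a set of
# narrow-band filter images for a source at an assumed redshift z.
# The filter list and z are made up for illustration.

def rest_wavelength(observed_nm, z):
    """Observed wavelength -> emitted (rest-frame) wavelength: lambda_obs / (1 + z)."""
    return observed_nm / (1.0 + z)

def is_visible(wavelength_nm, lo=380.0, hi=750.0):
    """Rough limits of human vision, in nanometers."""
    return lo <= wavelength_nm <= hi

z = 2.0  # assumed spectroscopic redshift
observed_filters_nm = [900, 1500, 2000, 4400]  # hypothetical infrared image set

for obs in observed_filters_nm:
    rest = rest_wavelength(obs, z)
    tag = "visible" if is_visible(rest) else "not visible"
    print(f"{obs} nm observed -> {rest:.0f} nm rest-frame ({tag})")
```

Here the 1500 and 2000 nm images de-shift to roughly 500 and 667 nm (greenish and reddish), so a crude two-color rendering would be possible, while the other two images fall outside the visible band.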
But it would probably be very boring to look at!
As @JohnDoty points out:
> JWST covers over five octaves of spectrum, while the visible spectrum is less than an octave. So, even after any shift you might apply, most of the information in the data would be invisible to human eyes.
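A quick sanity check of that octave comparison, assuming JWST's nominal 0.6 to 28.5 micron coverage and roughly 380 to 750 nm for human vision:

```python
import math

def octaves(lo, hi):
    """Number of octaves (factors of two in wavelength) between lo and hi."""
    return math.log2(hi / lo)

jwst = octaves(0.6, 28.5)        # microns: NIRCam through MIRI
visible = octaves(380.0, 750.0)  # nanometers: approximate human vision
print(f"JWST: {jwst:.2f} octaves, visible: {visible:.2f} octaves")
```

This gives about 5.6 octaves for JWST versus just under one octave for the eye, so any single shift can map at most a small slice of the data into the visible range.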
Images taken through many filters from many telescopes are available online. The ones from JWST may not be available for us to play with immediately after they are taken, but in the meantime there are plenty of multi-filter image sets from Hubble one can use to practice de-shifting.
You can try this yourself, and if you need help, just ask a new "how to de-shift multi-filter images" question. I predict that the result will look boring and not be very informative, but give it a try!