I'm answering the questions I posed. While cygnusv's answer was correct for the idea as I presented it, it did not help me, since I had already identified and solved that issue but failed to present the corrected algorithm in my question. I'm not saying cygnusv is wrong; the answers given are correct and I gave a +1. However, some of my questions were not addressed.
The problem of zeros on the right can be fixed simply by appending a '1' bit, or the string length, to the input string. I was already doing this in the toy hash function built on this idea, but the padding was applied before the call to unpack_factors, so I neglected to mention it (this was completely my fault).
With this fix in place, the function gains what I will cautiously call collision resistance, with that claim based solely on empirical testing (supplying inputs and recording outputs), not on any theoretical argument.
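The padding fix can be sketched as follows. Note that this `unpack_factors` is a hypothetical reconstruction based only on the behavior described below (a 1 bit multiplies in the current prime, a 0 bit advances to the next prime); it is my assumption, not the exact function from the question:

```python
def next_prime(n):
    """Smallest prime strictly greater than n (trial division; fine for small n)."""
    c = n + 1
    while any(c % d == 0 for d in range(2, int(c ** 0.5) + 1)):
        c += 1
    return c

def unpack_factors(bits):
    """Hypothetical reconstruction: a '1' multiplies in the current prime,
    a '0' advances to the next prime."""
    n, p = 1, 2
    for b in bits:
        if b == "1":
            n *= p
        else:
            p = next_prime(p)
    return n

def pad(bits):
    """Append a '1' bit so inputs differing only in trailing zeros
    map to distinct integers."""
    return bits + "1"

# Without padding, trailing zeros are invisible: "1" and "10" both unpack to 2.
assert unpack_factors("1") == unpack_factors("10") == 2
# With the padding fix, the two inputs unpack to different integers (4 vs. 6).
assert unpack_factors(pad("1")) != unpack_factors(pad("10"))
```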
The hardness of factoring
Regarding the "hardness of factoring": this really only applies to RSA-style composites, where even the smallest prime factor is very far from 0 on the number line.
Factoring numbers composed of large primes is slow because of the large number of candidate primes to test. If a number is composed only of small primes, it can be factored efficiently even by a naive factoring algorithm.
For instance, unpacking a bitstring of all 1's yields 2 ** N, where N is the length of the string. Even a basic factoring algorithm disposes of this almost immediately, more or less regardless of how long the input string is.
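To illustrate, a naive trial-division routine handles 2 ** N instantly, because the only divisor it ever needs to test is 2:

```python
def trial_factor(n):
    """Naive trial division; fast whenever every prime factor is small."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# 2 ** 4096 (the unpacking of an all-ones bitstring of length 4096) factors
# immediately: after dividing out the 2s, n == 1 and the loop exits at once.
assert trial_factor(2 ** 4096) == [2] * 4096
```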
However, unpacking a bitstring of many 0's with a single 1 bit at the end takes significantly longer, and if the run of 0's is long enough to push the resulting prime far enough to the right on the number line, then it would be "safe" in the sense that factoring it would never finish. But composing such a number requires generating that prime in the first place, which is equally time consuming, so this is not a situation we can rely on.
Output truncation and rotation
Now, as for truncating the output: this has an interesting effect. It not only removes the collision resistance, it makes it trivial to compute an unbounded number of preimages for a given output. If you keep only the last N bytes of the output, then anything that unpacks and truncates to those same last N bytes is a preimage.
In other words, given an output, you can prepend any byte(s) you want on the left, factor the result, and recover a valid preimage. While you still won't know exactly which input produced the output, you can construct an unbounded set of inputs that it must reside in.
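A minimal sketch of this attack, using a hypothetical reconstruction of `unpack_factors` (a 1 bit multiplies in the current prime, a 0 bit advances to the next prime) plus an inverse `pack_factors`; both are my assumptions about the scheme, not the exact functions from the question. Trial division suffices here only because the example is tiny, mirroring the point that factoring is the cost of the attack:

```python
def next_prime(n):
    """Smallest prime strictly greater than n (trial division; fine for small n)."""
    c = n + 1
    while any(c % d == 0 for d in range(2, int(c ** 0.5) + 1)):
        c += 1
    return c

def trial_factor(n):
    """Naive trial division; adequate only when all prime factors are small."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def unpack_factors(bits):
    """Hypothetical reconstruction of the function from the question."""
    n, p = 1, 2
    for b in bits:
        if b == "1":
            n *= p
        else:
            p = next_prime(p)
    return n

def pack_factors(factors):
    """Inverse of the hypothetical unpack: a '0' advances to the next prime,
    a '1' emits one occurrence of the current prime."""
    bits, p = [], 2
    for q in sorted(factors):
        while p < q:
            bits.append("0")
            p = next_prime(p)
        bits.append("1")
    return "".join(bits)

def forge_preimage(truncated, prefix):
    """Prepend arbitrary bytes to the truncated output, factor the resulting
    integer, and pack the factors back into a valid input bitstring."""
    full = int.from_bytes(prefix + truncated, "big")
    return pack_factors(trial_factor(full))

# Suppose the truncated hash keeps only the last byte, 0x14. Prepending the
# (arbitrarily chosen) byte 0x01 gives 0x0114 = 276 = 2*2*3*23, which factors
# easily, and the packed bitstring is a preimage whose output ends in 0x14.
preimage = forge_preimage(b"\x14", b"\x01")
n = unpack_factors(preimage)
assert n.to_bytes(2, "big")[-1:] == b"\x14"
```

Each distinct prefix yields a distinct preimage, which is why the set of recoverable preimages is unbounded.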
I would think that rotating the output bits by a data-dependent amount increases the complexity of recovery by at most the maximum number of bits rotated, but I am not totally sure. I couldn't identify any effect for rotation comparable to what truncation causes, though, besides probably introducing collisions.