I believe the question arises from a few misconceptions about computational models. I'll try to address those.
The question describes a program $p$ as a transition from a single state $x$ to another one, $y$. This view does not agree with the common notion of program used in computer science.
In CS, there is no "absolute" definition of "program". We typically define, on a case-by-case basis, a programming language as a set of strings (elements of a free, finitely generated monoid), and equip each such string with a semantics. This can be done in a plethora of distinct ways. We have small-step operational semantics, which describes how a program state repeatedly changes from the beginning to the end of the computation (if there is an end). We have big-step operational semantics, which only relates the start and end states, without caring about the intermediate ones. We also have other styles of semantics, such as denotational and axiomatic ones, that do not focus so much on program states.
That being said, let us restrict ourselves to operational semantics. Even in that case, a program does not just map a single state $x$ to a single state $y$. A program can always receive some input from outside and act accordingly. So, at the very least, the same program can start from multiple initial states $x$ (assuming the input is encoded in some way in the state) and, assuming termination, will reach a potentially distinct state $y$ for each input.
The question then mentions that this is invertible because there is always a program $q$ that can change $y$ back to $x$. While that might be technically correct in many computational models, it relies on a quantification of the variables that makes the result useless in practice.
Indeed, what happens is that for each $x$ and each $y$ there is a program $q$ that changes $y$ back to $x$. That is true only because we can choose $q$ depending on $x$ and $y$. Moreover, that does not imply that there is a program $q$ that can change any end state $y$ to "its own" starting state $x$. The main misconception here is having a program $q$ which is only required to act on a single case, the states $x$ and $y$. This is not a useful view: any notion of "program" must be able to work with many (often infinitely many) cases.
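The quantifier order can be made concrete with a minimal Python sketch (the function names are my own): for a fixed pair $x$, $y$, the "inverse" program $q$ can simply be the constant program that outputs $x$, ignoring its input entirely. Such a $q$ exists for every pair precisely because it is allowed to depend on $x$.

```python
def make_inverse_for(x):
    # A "program" q tailored to one specific pair (x, y):
    # it ignores its input and outputs the memorised x.
    def q(y):
        return x
    return q

# For each concrete run of some program p we can build such a q...
x = 41
y = x + 1            # pretend p computed y from x
q = make_inverse_for(x)
assert q(y) == x     # ...but q only "inverts" p because it hard-codes x.
```

Note that `q` is useless as a general inverse: it returns `41` on every input, so it recovers the right starting state for exactly one run of $p$.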
The question then mentions program determinism to claim that computation can always be reversed. Determinism is a fundamental property that simply states "same input, same behaviour", or "same input, same output". In other words, it states that a program behaves like a (partial) function. This, however, does not ensure that the function is invertible.
Consider a program that, given a natural number as input, outputs its half, rounded down. There is no general way here to compute the input given the output.
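In Python this is floor division. The program below is perfectly deterministic, yet two distinct inputs collide on the same output, so no program can recover the input from the output alone.

```python
def half(n):
    # Deterministic: same input always gives the same output.
    # Not injective: distinct inputs may give the same output.
    return n // 2

print(half(6))  # 3
print(half(7))  # 3
```

Given the output `3`, there is no way to tell whether the computation started from `6` or from `7`; the information has been erased.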
Finally, let me add that in many computational models we do have some way to compose programs, just as we can compose functions. This, however, only forms a monoid, not a group.
To have a group you'd need to restrict the computational model so that it can only compute bijections. Even if this is not the common case in CS, one can still choose to explore this realm. Indeed, as far as I know, that's the aim of research in reversible computation (including its applications to quantum computing).
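The monoid structure can be sketched in Python (all names here are my own illustration): composition of functions on the naturals is associative and has an identity element, but a non-injective element such as the halving program has no inverse, so the structure cannot be a group.

```python
def compose(p, q):
    # Run q first, then feed its output to p.
    return lambda n: p(q(n))

identity = lambda n: n     # the unit of the monoid
half = lambda n: n // 2    # an element with no inverse

f = compose(half, identity)
assert f(7) == half(7)     # identity acts as a unit

# No g can satisfy compose(g, half)(n) == n for all n:
# half(6) == half(7) already erases which input we started from.
assert half(6) == half(7)
```

Restricting the monoid to bijections (as reversible computation does) is exactly what recovers the group structure.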
From the comments:

Composing $p$ and $q$ is allowed: run $q$ until it halts, then use the output as the input to $p$. (Whether or not $q$ does halt doesn't matter; it just means the composition doesn't halt unless $p$ and $q$ both halt.)

$p$ cannot continue after HALT. That doesn't mean a machine $r$ that's simulating $p$ cannot continue after $p$ halts. And just because composition exists doesn't mean every pair of programs must be composable. (You can type your programs and have a family of composition functions, or you can define a single untyped composition function that simply throws an error if the output of $q$ is incompatible with the input of $p$.)
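The untyped-composition idea from the comment can be sketched in Python (the `accepts` check and all names are my own illustration): a single `compose` that, at run time, verifies that $q$'s output is acceptable input for $p$, and throws an error on a mismatch.

```python
def compose(p, q, accepts=lambda v: True):
    # Run q, then p on q's output; error out if they don't fit.
    def composed(x):
        v = q(x)
        if not accepts(v):
            raise TypeError("output of q is not valid input for p")
        return p(v)
    return composed

# p expects a digit string; q strips whitespace from raw text.
parse_then_double = compose(lambda s: 2 * int(s),
                            str.strip,
                            accepts=str.isdigit)

print(parse_then_double(" 21 "))   # 42
# parse_then_double(" x ") would raise TypeError: incompatible output.
```

The typed alternative mentioned in the comment would instead rule out such mismatches statically, with one composition operator per compatible pair of types.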