Let's start here:
If we assume that the purpose of consciousness is to minimize the difference between the predicted outcome and the inevitable outcome (as I understand from the above statement in the same answer), what purpose does that serve, i.e., what does an experiencing entity (like us) gain from that prediction/experience, when the flow is already deterministic?
I think you're confusing determinism with fatalism. Suppose I'm in a room with an oracle (call him Nostradamus), and we play a simple game. I pick a number from 1 to 10. But before I do, Nostradamus predicts what I will pick. Now since it helps define the difference, suppose Nostradamus actually tells me what his prediction is; in particular, suppose he says I pick 3.
If fatalism is true, then I basically must pick 3; the oracle predicted it, after all, so it cannot be avoided. But if determinism is true, I can easily pick 4.
This is actually easy to do. I just pick 3 if Nostradamus says I will pick any number but 3. But I pick 4 if he says I will pick 3. All we need for me to have this strange capability is determinism; I don't need free will, and indeterminism actually hampers this ability; the trick to spiting Nostradamus's prediction is to react to what his prediction is. I don't even need to be conscious; we could program Alexa to fill my role if you like.
Fatalism makes the predictions pointless, but determinism doesn't. If you can avoid a bad outcome by predicting it would happen, then reacting to the prediction, you can gain a survival advantage. Having this ability requires neither free will, nor indeterminism, nor even consciousness; simply the ability to model and react to the model. Robots can duck if rocks are thrown at them. Self-driving cars can apply brakes if they predict that they would otherwise collide with an object. Nothing mysterious is required for "evitability".
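The predict-and-react loop can be sketched entirely mechanically. This is only an illustrative toy, not any real control system; the function names, the threshold, and the numbers are all assumptions made up for the example:

```python
# Toy sketch of "evitability": model the future deterministically,
# then react to the model's prediction. Purely illustrative.

def predict_collision(position, velocity, horizon):
    """Extrapolate the current state: do we reach the obstacle (at 0)
    within `horizon` time steps?"""
    return position + velocity * horizon <= 0

def controller(position, velocity, horizon=3):
    """React to the prediction: brake if a collision is predicted."""
    if predict_collision(position, velocity, horizon):
        return "brake"
    return "coast"
```

Every step here is deterministic, yet the predicted bad outcome is avoided precisely because it was predicted: `controller(10, -5)` predicts a collision and brakes, while `controller(10, -1)` coasts.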
So if consciousness were mechanistic, and it in part played a role of avoiding bad outcomes by predicting them and then avoiding them, it could be a survival advantage; and such would not require anything non-deterministic. Simply modeling and reacting to the model suffices.
But let's suppose consciousness is not mechanistic; instead, let's presume it's an epiphenomenon; in particular:
The conscious experiences that accompany brain processes are causally impotent.
Under this premise, let's consider this question:
Again, if the next conscious experience is independent of the past conscious experiences, what real help is consciousness doing?
Then the answer is simple. It's of no help; since consciousness is causally impotent, it plays no role in our survival.
Counterintuitively, however, this only means that consciousness does not help. It does not necessarily mean that having consciousness does not help. In particular, it could very well be the case that there are particular sorts of mechanisms that, if we had them, would grant us a big survival advantage. It could also be that those mechanisms just so happened to be such that, were they in play, they would somehow result in an epiphenomenal consciousness. Were this the case, having consciousness would mean we have those mechanisms. In other words, having consciousness could still correlate with having a survival advantage (because a mechanism granting us one could be a "confounding variable"), even if it does not cause it.
Update: The comments seem to confirm the conflation of determinism with fatalism, so let's dig in here. As noted in the comments, when considering Nostradamus we are adding an assumption that the oracle is possible.
Determinism versus Fatalism
Let's get to basic definitions. Determinism can be defined as the premise that every effect is the result of antecedent causes. Let's call the presumption of determinism (P1).
Now let's swap Nostradamus out for "some guy", call him Ralph. I now want to build a prediction-spiter; let's say it's the non-conscious "Alexa" device. I want to program Alexa in such a way that if Ralph says 3, Alexa says 4. If Ralph says 1, 2, 4, 5, 6, 7, 8, 9, or 10, Alexa says 3. Now this is what I want to do, but I can't necessarily do that by assuming determinism. So I need another premise. Let's presume it's possible to program Alexa this way. Call that (P2).
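The program (P2) asks for is a one-liner, which is the point: nothing about it requires consciousness, free will, or indeterminism. A minimal sketch (the name `alexa` is just a label for the spite-machine):

```python
def alexa(ralph_says):
    """The spite-machine of (P2): a fixed, fully deterministic rule
    whose output never matches its input."""
    return 4 if ralph_says == 3 else 3
```

The output is entirely determined by the input, and yet `alexa(x) != x` for every `x` from 1 to 10.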
Now let's explore what happens. If we take them as Ralph-says-x/Alexa-says-y tuples, the possibilities form the exhaustive set:
S={(1,3), (2,3), (3,4), (4,3), (5,3), (6,3), (7,3), (8,3), (9,3), (10,3)}
No member of this set is such that x=y; in other words, it's not possible for Ralph to say what Alexa will say. But what's more telling here is that this is a perfectly reasonable description of an entirely deterministic universe. Every possible scenario in S follows determinism, by definition; each effect is an inevitable result of an antecedent cause. All that's happening here is that we're claiming that what Ralph says is the antecedent cause, and what Alexa says is the effect.
Now, swap Ralph back out with Nostradamus. Nothing here should change about the mechanics of the universe; the only thing that changes here is the presumption that Nostradamus can fill Ralph's role, and be an oracle. But that's not necessarily possible, so we need another premise. Let's suppose it's possible in this scenario for Nostradamus to predict what Alexa will say, and for Alexa to be unable to do anything else. Call this premise (P3).
We now have a contradiction. Suppose Nostradamus makes a prediction (P3 presumes this is possible), and call it x1. (P3) implies Alexa must say y1, such that x1=y1. That conflicts with (P2), because (P2) does not have an (x,y) where x=y in its set of possibilities.
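The contradiction can be checked exhaustively. (P3) demands a self-fulfilling prediction, i.e., some x with alexa(x) = x; enumerating the whole input space shows no such x exists. (The `alexa` function below is the same illustrative spite rule as above, restated so this snippet stands alone.)

```python
def alexa(ralph_says):
    # The deterministic spite rule from (P2).
    return 4 if ralph_says == 3 else 3

# (P3) requires a prediction x1 with alexa(x1) == x1.
fixed_points = [x for x in range(1, 11) if alexa(x) == x]
# fixed_points is empty: no prediction can be correct once it is fed
# back in as the input. That is the clash between (P2) and (P3).
```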
But a contradiction doesn't tell you that a particular premise is wrong; it only tells you something is. So let's break this down:
- {(P1),(P2),(P3)} leads to a contradiction.
- {(P1),(P3)} would work just fine, though; if we don't assume that building Alexa like this is possible, then any Alexa we build must have an inevitable and predictable outcome (by P3, which presumes that just this is possible). However, this is insufficient to argue that (P1) leads to fatalism; we can only say that {(P1),(P3)} does.
- {(P1),(P2)} also works fine. We already explored this. Everything about the Ralph scenario is consistent with determinism. But in regard to the suggestion that (P1) per se leads to fatalism, this is telling.
In particular, the fact that {(P1),(P2)} is a valid consideration means that determinism per se does not lead to fatalism. The argument that Nostradamus shows (P1) leads to fatalism is flawed for this reason; in particular, the flaw is that it imports fatalism itself into the premises, thus begging the question.
Note something crucial here, however. The impossibility of an oracle under determinism requires two things: (1) a spite-machine, and (2) the oracle's prediction specifically being fed to it as an input. Were Nostradamus not to "interfere", we could always import him into a deterministic universe. Alexa's not breaking anything; Alexa's not conscious, doesn't have free will, and isn't indeterministic. If Nostradamus were an all-knowing calculator, it would be trivial to work out what Alexa would do. It's only if the result of that calculation were itself an input that we can wire in contradictions. In other words, the conflict here has nothing to do with consciousness per se, or even free will; it's simply a result of effects having antecedent causes, an oracle being an antecedent cause, and a mechanism to "counter" that cause. But this description in itself is enough to import "evitability", which grants survival advantages.
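Both halves of that point can be shown in a few lines. A non-interfering oracle that merely computes Alexa's behavior works perfectly well in a deterministic universe; the trouble only starts when we demand that the oracle's own prediction be Alexa's input. (As before, `alexa` and `oracle` are illustrative names for the spite rule and the all-knowing calculator.)

```python
def alexa(says):
    # The deterministic spite rule from (P2).
    return 4 if says == 3 else 3

def oracle(alexas_input):
    """A non-interfering oracle: since the universe is deterministic,
    it can simply compute what Alexa will do given Alexa's input."""
    return alexa(alexas_input)

# Harmless case: Ralph says 7, and the oracle correctly foresees that
# Alexa will say 3. Determinism and prediction coexist just fine.
prediction = oracle(7)

# Interfering case: make the oracle's own prediction Alexa's input.
# That demands a p with p == alexa(p), and no such p exists.
self_fulfilling = [p for p in range(1, 11) if p == alexa(p)]
```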
Finally, a claim like "the universe is in no way deterministic" cannot be backed up fool-proof.