If #1 were true, then we would never exist, since the future always
has a greater probability density than the present and the past. If #2
were true, then we would also never exist, for the same reason as #1.
This is incorrect. #1 and #2 just suggest that we are unusual according to a particular parameter. Being unusual according to a particular parameter is the most usual thing there is, and you don't even need two parameters to make it true. All you need to do is shift where you put the parameter's arbitrary origin.
For example, suppose that in the distant future, the total number of humans that have ever existed is 10^100. If we parameterize humans by the number of past humans, then we are part of a very unusual 100-gigahuman sample. If we instead parameterize humans by the number of humans between them and the 0.33 x 10^100th human - exact same parameter, different origin - all of a sudden we're comfortably in the middle of the distribution.
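The origin shift can be checked with a few lines of arithmetic. The numbers below are the same illustrative ones as above - 10^100 humans ever, and an assumed ~10^11 born so far:

```python
# Illustrative numbers from the text: 1e100 humans ever, ~1e11 born so far.
TOTAL = 10**100
US = 10**11  # rough count of humans born to date (assumption)

# Parameter 1: number of past humans (origin at human #0).
# Fraction of all humans earlier than us:
p1 = US / TOTAL
print(f"percentile under 'past humans' origin: {p1:.0e}")  # a vanishing tail

# Parameter 2: distance from the 0.33e100th human - same parameter, shifted origin.
ORIGIN = 33 * 10**98
ours = abs(US - ORIGIN)
# Humans closer to ORIGIN than we are occupy [ORIGIN - ours, ORIGIN + ours],
# clipped to the range of humans that ever exist.
closer = min(ORIGIN + ours, TOTAL) - max(ORIGIN - ours, 0)
p2 = closer / TOTAL
print(f"percentile under shifted origin: {p2:.2f}")  # thoroughly ordinary
```

Same humans, same parameter, one arbitrary choice of origin changed - and "freakishly early" becomes "66th percentile, nothing to see here."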
The doomsday argument cited in a previous answer is also nonsense, for a different reason: statistics only generates useful predictions about what's likely given the available evidence if you actually use the available evidence.
For instance, suppose Bob has a cat. Bob knows that cats have an average life expectancy of 14 years. If all Bob knows is that he has a cat and that cats live for 14 years, Bob should expect his cat to live another 7 years. On the other hand, if Bob has a sick 20-year-old cat whose siblings all died years ago, he should not expect his cat to live more than another year or two. If Bob has a healthy kitten from a long-lived breed and plans to take good care of it, he could reasonably expect his cat to live another 17 years. If Bob has just run over a cat in his truck, he should not think to himself, "No big deal, it's a cat. Statistics says it'll probably live another 7 years!"
The valid statistical inference is not "If Bob has a cat and cats live 14 years on average, then Bob's cat will probably live 7 more years."
It's "If the totality of known facts is: [Bob has a cat; cats live 14 years on average], then Bob's cat will probably live 7 more years."
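The cat example can be sketched as a toy simulation. The Gaussian lifespan model and its parameters below are invented purely for illustration; the point is that each prediction is a conditional expectation over a different evidence set:

```python
import random

random.seed(0)

# Invented model: cat lifespans roughly Gaussian around 14 years (illustration only).
lifespans = [max(0.0, random.gauss(14, 4)) for _ in range(100_000)]
avg = sum(lifespans) / len(lifespans)

# Evidence: "Bob has a cat, age unknown." If the cat is equally likely to be at
# any point in its life, expected remaining life is half the average lifespan.
print(f"totality of evidence = [has a cat]: ~{avg / 2:.1f} years left")

# Evidence: "the cat is already 20 and still alive."
# Condition on survival to 20 and average the remaining years.
survivors = [life - 20 for life in lifespans if life > 20]
print(f"totality of evidence = [cat, alive at 20]: "
      f"~{sum(survivors) / len(survivors):.1f} years left")
```

With these made-up numbers the second estimate lands at a year or two, which matches Bob's sick-old-cat intuition - not because the model is right, but because adding evidence changes what you are conditioning on.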
Try another one. There is a raffle. You have 1 out of 100 tickets. There will be 10 winners. You start by predicting a 10/100 chance of winning.
The first winning ticket is identified. It is not yours. You now predict a 9/99 chance of winning. The second winner is read - 8/98. All of a sudden, a dozen angry badgers are set loose in the crowd. Maybe the raffle will close early and there won't be any more winners. Maybe the first two winners will be eaten by badgers and two new winners will be drawn. You just don't know. Adding more knowledge - first about the first two winners, then about a dozen angry badgers - at first decreased your probability of winning, then greatly increased the error bars on your estimates.
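The pre-badger updates are plain conditional probability, which a few lines make explicit. The badgers are exactly the point of the story: they are outside the model, so no update can be computed for them:

```python
from fractions import Fraction

tickets, winners = 100, 10  # you hold 1 of the 100 tickets

# Before any draw: P(win) = winners / tickets.
p0 = Fraction(winners, tickets)

# First winner announced, not you: 9 winners left among 99 tickets,
# each remaining ticket equally likely to be among them.
p1 = Fraction(winners - 1, tickets - 1)

# Second winner announced, not you: 8 winners left among 98 tickets.
p2 = Fraction(winners - 2, tickets - 2)

print(p0, p1, p2)  # 1/10 1/11 4/49 - each update shrinks your chances

# At this point the badgers arrive. There is no line of code for that:
# the model (fixed tickets, fixed winners, fair draws) no longer applies.
```

Each update is a valid inference only because the evidence ("that winner wasn't me") fits the model; the badgers break the model itself, which is why the honest answer becomes wider error bars rather than a new fraction.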
In this case, if literally all you know about humans is that you are one and that they die, then the cited argument holds: you are indeed 95% likely to be among the last 95% of humans. And 95% likely to be among any other group of 95% of humans. Including, for example, the 95% of humans whose circumstances are the least like the circumstances that we actually measure.