A lot depends on the purpose and claims of the paper. If you claim to fully solve a problem, you'd better address the key lines of evidence and appropriate consistency criteria for a theory. (Note the restriction from all to key, as some observations are less certain or carry less weight than others.)
But there is no need to make this a forced dichotomy between addressing all or only one of the key lines of evidence; as an author you should strive to write good, accurate papers that address what you can. Certainly, full agreement with all key lines of evidence need not be demonstrated in the initial paper; rather, it may be enough to sketch a promising path forward (which may require expertise you don't have). In general, promising ideas are worth exploring, and are thus worth publishing. On the other hand, if you propose a model of dark matter that can never explain the observed galaxy rotation curves, your model simply does not stand much chance as an explanation for our universe. I can still imagine cases where it would be valuable as a toy model or as a way to constrain future models, but any such paper should spell out this restriction.
This is all research, so much is always in flux or unclear. It is thus much more common to write papers addressing some specific properties and to treat them as pieces in a much larger jigsaw puzzle than it is to claim to have arrived at a true resolution. Genuine scientific progress is typically messy and hard to capture in a single paper. There are always future questions to be addressed. This is especially the case if your paper (like many of the most important theoretical physics papers) makes novel predictions.
Consider Steven Weinberg's famous A Model of Leptons paper, Phys. Rev. Lett. 19, 1264 (1967). This short letter brought together many ideas and unified the electromagnetic and weak interactions. Weinberg did not declare it the theory of electroweak interactions; he just called it a model. And he flagged that the question of whether the model was renormalizable would need further investigation. That question was answered affirmatively in 1971 by Gerard 't Hooft. Following experimental confirmation of weak neutral currents and the development of quantum chromodynamics, the remarkably successful standard model of particle physics was formed.
Option #1 seems like it requires a formidably large amount of work, taking at least several years to complete. There would also be a high failure rate, so even embarking on such an endeavor would not be easy.
Well, yes, nature provides many constraints on what kinds of physical theories are useful for describing our universe. This is a feature, not a bug. However, we also don't want to stay trapped in local but not global minima in "theory space", so appropriately disclosed speculative excursions in various directions can be useful. Hence, as I outlined above, papers rarely go to the extreme of your option #1. That said, it is a common criticism that the current funding climate and publish-or-perish culture systematically discourage the kind of long-term, risk-taking efforts that are needed for truly groundbreaking work. Instead, more and more papers are produced, most of them incremental or of dubious value. After all, that's what the system rewards.
On the other hand, with option #2, it seems like a natural question the reviewer will ask is "What about the other lines of evidence?" and to conclusively answer that would require doing option #1.
It is a reasonable question to ask, but reasonable reviewers and editors also understand that some aspects will be left for future work. Again, if you can demonstrate that your approach is also promising with respect to aspects not addressed head-on in your paper, your work is likely to be taken more seriously. If there are strong reasons to think your approach will never work in these regards, it might not be worth publishing.
Furthermore, I would also expect option #2 to lead to New Theory Paper: Part #2 which says "We tried to explain ___ with our theory, but we couldn't get it to work, therefore the theory is probably wrong", and I'm not aware of any papers of this kind.
I don't have an example of such a paper off the top of my head. Of course, there is a bias against publishing purely negative results, with the possible exception of comment papers by other authors. Many other things can happen too: perhaps an observational paper comes out noting disagreement with the theory, or perhaps a new paper is written by the original author(s), highlighting some issue and proposing a tweak to resolve it or an entirely different model. But sometimes a dead end is a dead end and there are no more papers. The citation record can be a helpful heuristic for gauging the usefulness of specific papers.