The situation: Our team of developers and testers is transitioning from ClearCase to git, in somewhat pioneering fashion. While experience with git is limited, there is some familiarity with Linux, Cygwin and MSYS; nobody is afraid of the command line, and people were generally not very happy with ClearCase (although, of course, there was a functioning workflow). We have the not-so-uncommon setup of a central remote repository which the team members use to exchange their contributions.
One of the major differences between git and ClearCase is that git stores versions of the entire source tree, while (base) ClearCase famously focuses on single files and directories. In ClearCase a history of the whole source tree (the sequence of check-in operations) is practically impossible to obtain, while in git it is a simple and often-issued git log.
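For illustration (the repository contents are whatever you have locally; the options shown are standard git):

```shell
# One line per commit for the whole source tree, with the
# branch/merge topology drawn as ASCII art, across all branches:
git log --oneline --graph --decorate --all
```

This is exactly the "overall archive history" that base ClearCase has no cheap equivalent for.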
As indicated in the title, one of the roles of a version control system is backup. I don't want to lose more than a day or so of work in a disk crash (1), so I check in/push about daily, even incomplete work. With ClearCase the second role, "publishing", is in our workflow realized by labeling. The "lowest quality label" is a moving label which a developer places on the file versions which, as a set, are in some working condition. Other team members see only labeled versions (except for what they work on themselves). Checking in often was therefore not a problem with this ClearCase workflow: other developers would only be confronted with the intermediate versions when they looked into a file history or file version tree. It would not affect their work.
With git, frequent commits, especially of immature code, are a nuisance which is usually avoided by local rebasing before pushing or merging. Unfortunately this remedy is not available after a push: I cannot rebase published history (the server does not even allow force pushes). But I must push frequently for backup. This conundrum exists even though I work on a feature branch, because pushing to a shared repo amounts to a sort of "publishing" even before the feature branch is officially "published" by merging it back into master. After that merge, all of the "dirty" commit history is polluting master.
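For concreteness, the usual pre-publication cleanup looks like this (a sketch; the branch name feature-x and the commit count are made up, and it is only safe while the commits are still unpushed):

```shell
# Squash/reword the last few local "wip" commits before they
# ever leave the machine (opens an editor with a pick list):
git rebase -i HEAD~3

# When the branch is finally published, a --no-ff merge keeps
# master's first-parent history readable as one step per feature:
git checkout master
git merge --no-ff feature-x
```

The catch described above is precisely that the first command is forbidden once the commits have been pushed anywhere shared.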
Thus my wish, and the team's requirement, to produce a legible, meaningful publishing history collides with the need to regularly back up my work. This was less of an issue in ClearCase because the history steps are mostly hidden, and there is no overall archive history which is incremented by every single commit to a file (which is actually a problem in its own right, of course).
How do other people handle this? I could probably have a second, private remote repo somewhere on a network share (which would also allow force pushes) just for backup purposes, and then publish to the team repo only after rebasing and polishing. But I have never heard of such a workflow, and it seems cumbersome.
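The two-remote setup I am imagining would be little more than this (a sketch; the path and remote name backup are made up, and origin stands for the shared team repository):

```shell
# One-time setup: a bare repository on a network share,
# serving as a private backup remote next to "origin":
git init --bare /mnt/share/jdoe/backup.git
git remote add backup /mnt/share/jdoe/backup.git

# Daily: push everything, including unfinished feature branches:
git push backup --all

# Because nobody else pulls from it, rewriting its history
# after a local rebase is harmless:
git push backup --force --all

# Publish to the team repository only once the branch is polished:
git push origin feature-x
```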
Is it simply that most people do not back up that often (say, only every week or so)? Is that acceptable?
(1) And of course my local repo resides on the same disk as the working tree.