10

I am asking about this git branching model or workflow. I really like this. It strikes me as very intuitive and productive, but what I am asking for is whether there are any flaws or negatives to this approach that are not yet clear to me (coming from another world where ClearCase ruled the day).

(You don't need to answer every question, whatever you can is helpful)

  1. Do you use this or a similar git branching workflow?

  2. Do you consider this a productive approach?

  3. Do you see any flaws with this approach? Any potential downside?

  4. If you have a better approach, would you mind sharing, or providing a link to an article or discussion about it?

5 Answers

6

For the most part, this is the usual workflow employed with any VCS we've used so far. With some (CVS, SVN) it's harder to do; with Git it's trivial. That said, I have two remarks:

First, there are two schools of thought when it comes to feature branches:

  1. Merge them
  2. Rebase them

(1) is what the article seems to suggest. The problem with merge commits is the so-called Evil Merge: specifically, one that joins development paths where a function has changed semantics in one of the branches, but the automatic merge fails to patch up all occurrences in the code coming from the other branch. Regressions introduced that way are notoriously hard to debug. As a Git user, you can usually be much more relaxed about regressions, because you have git bisect to find their causes for you automatically. In the situation described, however, git bisect will point out the merge commit, which doesn't help you at all.

(2) avoids this problem by keeping the history as linear as possible. Those opposing rebases claim that rebasing invalidates any testing you may have done prior to the rebase.

Personally, I'm firmly in camp (2), because I value the validity of git bisect results more than the potential loss of test coverage, which is easily compensated for by using a proper CI system.
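In command terms, the two camps differ only at integration time. A minimal sketch of the rebase-first variant in a throwaway repository (all branch and file names invented):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com && git config user.name dev
echo base > file && git add file && git commit -qm base
git branch -M master                     # pin the mainline name
git checkout -qb feature-x
echo feature > feature.txt && git add feature.txt && git commit -qm feature
git checkout -q master
echo main > main.txt && git add main.txt && git commit -qm mainline

# Camp (1) would now do: git merge --no-ff feature-x  (adds a merge commit).
# Camp (2) replays the feature on top of master first, so the final
# merge is a plain fast-forward and history stays linear:
git checkout -q feature-x
git rebase -q master
git checkout -q master
git merge -q --ff-only feature-x
git log --oneline                        # linear: feature, mainline, base
```

With `--ff-only`, the merge refuses to create a merge commit at all, so any bisect later lands on a real feature commit rather than on the join point.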

Second, I have decided for myself that pushing between developers is rarely a good idea. There are security issues in allowing everyone to ssh into your box to fetch, or in running git-daemon locally, and, more importantly, in teams that are not extremely small, the oversight can get lost rather rapidly.

That said, I'm all in favour of a staging repository (sometimes also called scratch), which allows the subteams to share their work-in-progress via a central server that is, however, different from the main one (which is often outward-facing, if not public). Typically, each subteam maintains one topic branch for itself, and a CI system performs periodic octopus merges of all topic branches into one big integration branch, complaining about conflicts and build errors.
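The CI-side octopus merge can be sketched like this (team and branch names are hypothetical, and the repository is a throwaway):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
git config user.email ci@example.com && git config user.name ci
echo base > base && git add base && git commit -qm base
git branch -M master
for team in team-a team-b team-c; do
  git checkout -qb "topic/$team" master
  echo work > "$team.txt" && git add "$team.txt" && git commit -qm "$team work"
done
# The integration branch octopus-merges every topic branch in one commit;
# git aborts the whole merge if any pair of branches conflicts.
git checkout -qb integration master
git merge -q -m "integrate all topics" topic/team-a topic/team-b topic/team-c
git log --oneline --graph
```

Passing several branches to a single `git merge` selects the octopus strategy automatically; it deliberately refuses to do any conflict resolution, which is exactly the "complain loudly" behaviour you want from a CI job.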

  • +1, never heard of a staging repository called as scratch but I'd imagine it comes from "starting from scratch" :-)
    – Spoike
    Commented Apr 30, 2011 at 6:05
  • @ReinHenrichs: can you keep cool and argue about why you disagree with rebasing
    – CharlesB
    Commented Apr 30, 2011 at 8:56
  • Sorry. The claimed git bisect issue doesn't exist: git bisect can bisect into merge commits. A linear history becomes difficult to maintain as the number of developers (or topic branches) increases. Furthermore, by not branching and merging, you lose out on a very powerful workflow tool and one of the main benefits of using Git in the first place. You don't have to "push between developers"; you can set up a remote, public (within the developer team, at least) repository for each developer. It's easy to do. What you're essentially describing is using Git like SVN. Commented Apr 30, 2011 at 15:55
  • "Evil merges" are tidily prevented by running tests. The merge commits themselves provide useful metadata. I'm not sure what experience the OP has maintaining a Git repository with a large number of topic branches. We tried the rebase-and-flatten strategy on an open-source project with hundreds of topic branches, and it crumbled under the complexity. We switched to a merge strategy, did away with all of the complexity, and added utility without suffering any of the supposed drawbacks. git bisect was given as a reason to keep the flat strategy then as well. Commented Apr 30, 2011 at 16:01
  • @ReinHenrichs The "evil merges" mmutz was describing have nothing to do with git bisect alone. They happen when feature A changes a function that feature B also uses. All tests pass in both A and B prior to the merge, but after the merge, tests can break because of incompatible changes between A and B. git bisect can't partially apply one branch to the other, so its only clue is that the merge commit is where the bug was introduced.
    – Izkata
    Commented Mar 31, 2013 at 4:21
2

I'm currently in the process of a massive, long-running refactoring (converting an application from one GUI toolkit to another) and successfully use a rebase-centric workflow, because other team members continue to work on new features:

There are two main branches: master, where the new features are developed, and the toolkit-conversion branch. The most important rule is simple: do only things in the toolkit-conversion branch that are relevant to the conversion. Whenever something can be done in master (the old GUI toolkit), I do it there and rebase my toolkit-conversion changes onto the new master head. Another rule is to keep the toolkit-conversion branch quite short. Hence I often use reset, cherry-pick, amend-commit and rebase to glue small related commits onto the larger ones they belong to (which in the end serve the same purpose). This also works fine to "undo" a change when I've tried something that did not work well, or after I've refactored some code with temporary helper code.
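The gluing of small commits onto the larger ones they belong to can be done non-interactively with `--fixup` and `--autosquash`; a sketch in a throwaway repository (branch and file names invented):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com && git config user.name dev
echo base > app && git add app && git commit -qm base
git branch -M master
git checkout -qb toolkit-conversion
echo v1 > widget && git add widget && git commit -qm "convert widget"
# A small follow-up that logically belongs to the commit above:
echo v2 > widget && git commit -qa --fixup HEAD
# Glue it in place; GIT_SEQUENCE_EDITOR=true accepts the generated todo list,
# so the "interactive" rebase runs without any actual interaction.
GIT_SEQUENCE_EDITOR=true git rebase -qi --autosquash master
git log --oneline master..    # a single, clean "convert widget" commit
```

`git commit --fixup` records the target commit in the message, and `--autosquash` then reorders and squashes it during the rebase, which keeps the branch short and easy to review without manual todo-list editing.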

I've decided against merging changes from master into the toolkit-conversion branch, because it would make it much harder to rebase earlier commits to keep the branch clean and easy to review. Also, merges might introduce conflicts whose resolutions are not as clear as when keeping a clean history.

Of course, this workflow also has disadvantages. The most important one is that it only works well for a single person. Whenever I force-push the toolkit-conversion branch after rebasing it onto the head of master, pulling it into another repository becomes difficult (automatically rebasing onto the tracking branch often fails with conflicts).
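The force-push pain, and the simplest way out of it, can be seen in a sketch that simulates two developers sharing a branch through local clones (all names invented):

```shell
#!/bin/sh
set -e
base=$(mktemp -d); cd "$base"
git init -q --bare origin.git
git -C origin.git symbolic-ref HEAD refs/heads/master
git clone -q origin.git alice
cd alice
git config user.email alice@example.com && git config user.name alice
echo base > f && git add f && git commit -qm base
git branch -M master
git push -q origin master
git checkout -qb toolkit-conversion
echo work >> f && git commit -qam work
git push -q -u origin toolkit-conversion
# Bob starts tracking the branch...
cd "$base" && git clone -q origin.git bob
cd bob
git checkout -q toolkit-conversion
# ...meanwhile Alice rewrites history and force-pushes.
cd "$base/alice"
git commit -q --amend -m "work, rewritten"
git push -q --force origin toolkit-conversion
# Bob's plain pull would now try to reconcile two versions of the same
# branch; if he has no local commits, the simple fix is a hard reset:
cd "$base/bob"
git fetch -q origin
git reset -q --hard origin/toolkit-conversion
```

If Bob does have local commits on top of the old tip, he has to replant them explicitly (see the "Recovering from upstream rebase" section of the git-rebase documentation) rather than reset.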

In the end, my toolkit-conversion branch remains short, clean and easy to review. I could not imagine doing anything similarly powerful with, e.g., SVN.

2

In the company where I currently work, we've been applying a variation of this same branching model for a while. We also use Scrum, so we follow a branch-per-story workflow.

The only issue we've had so far: when the team is big enough that more than one story can be started at once, and those stories depend on each other, merging changes between the branches and back to master becomes kind of a mess.
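One way to tame such a dependency (not from the answer; branch names invented) is to start the dependent story on top of the story it needs, and replant it with `rebase --onto` once the first story lands, so its diff only ever contains its own work:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com && git config user.name dev
echo base > f && git add f && git commit -qm base
git branch -M master
git checkout -qb story/A
echo A > a && git add a && git commit -qm "story A"
git checkout -qb story/B          # B is started on top of A, not master
echo B > b && git add b && git commit -qm "story B"
# Story A is accepted and merged back...
git checkout -q master
git merge -q story/A
# ...and B is replanted onto the new master, carrying only its own commit.
git rebase -q --onto master story/A story/B
git log --oneline master..        # just "story B"
```

The three-argument form of `rebase --onto` says: take the commits between story/A and story/B and replay exactly those onto master, so A's commits never get duplicated in B's branch.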

Besides that, this has proven to be trustworthy :).

1

I'm currently busy adopting this workflow. I think it's quite a good workflow, because it uses the branching model that Git excels at.

The only small downside is that it takes some discipline to stick to this workflow and not take shortcuts.

The developers of Kohana also use this workflow, and they seem to like it quite a lot.

1

Do you use this or a similar git branching workflow?

We use a similar workflow at work, but a little less complicated. It is, however, greatly inspired by this workflow, since I've read the article many times. I even have a colour printout of the branching-model PDF next to my desk :)

Do you consider this a productive approach?

Productive? How do you define productivity? In my mind, the most important thing is quality, or at least trying to achieve better quality all the time and constantly improving the process. If you can produce quality code, productivity will benefit from it. So the question is really: does this improve the quality of the software? And my answer to that is definitely yes.

What I love most about this type of branching model is that it introduces branches at different levels of quality. The further to the right in the picture, the higher the stability and quality. The master branch is holy, and every commit on it should be regarded as a stable version of the software. The further left you go, the more experimental and less stable the code gets.

As you test new features and bug fixes, you gradually move them from left to right, promoting code exactly when you know it meets the quality requirements you demand of it. At least in theory, since you can't test everything to 100% and know for sure that the code contains no bugs; it always will. But it lets you maintain high confidence.

Nothing sucks more as a programmer than working on a system no one has confidence in, because everyone knows it just sucks and there's a shitload of bugs in it.

Do you see any flaws with this approach? Any potential downside?

It's important to think your branching model through so that it fits your organisation's needs well. Just because this model works well for some people doesn't necessarily mean it's optimal or desirable for others.

There are always trade-offs, and this case is no exception. One trade-off is the number of branches versus complexity. By introducing many different branch types, you increase the complexity of the workflow. For example, it might be just wrong to force people to create a new feature branch every time they fix a simple bug by changing a couple of lines of code.

We all know that bugs vary in how complicated they are to solve. So when a trivial bug is discovered, you might want to cut down on complexity and administration to get rid of extra overhead, and just let people commit directly to, e.g., the master or develop branch. But as fixes grow more complicated, the extra overhead of creating a branch for them becomes worth it, especially if you are unsure of the fix's size and duration, or if you want to improve collaboration with the other developers.
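That trade-off can be sketched with the model's own branch names (develop as the integration branch; the fix and feature themselves are invented), in a throwaway repository:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com && git config user.name dev
echo base > app && git add app && git commit -qm base
git branch -M develop
# Trivial two-line bug: skip the ceremony and commit straight to develop.
echo fix >> app && git commit -qam "fix off-by-one in pager"
# Larger piece of work: give it a branch and an explicit --no-ff merge,
# so the feature stays visible as a unit in the history.
git checkout -qb feature/pager-rework
echo rework >> app && git commit -qam "rework pager"
git checkout -q develop
git merge -q --no-ff -m "merge feature/pager-rework" feature/pager-rework
```

The `--no-ff` merge is the nvie model's convention: it costs one extra commit but keeps the feature's commits grouped, which is exactly the overhead the paragraph above argues is only worth paying for non-trivial work.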

If you have a better approach, would you mind sharing, or providing a link to an article or discussion about it?

This is no doubt a good approach, and it may fit most cases, since most of us have similar development processes, but it might not suit everyone. I strongly urge you to think about how you handle your code right now, and to create a branching model that fits what you already do.

The most important point is to get started with git and the rest will follow naturally. Start simple and gradually improve! Be creative!

Cheers
