
Our development team has been using the GitFlow branching strategy and it has been great!

Recently we recruited a couple of testers to improve our software quality. The idea is that every feature should be tested/QA'd by a tester.

In the past, developers worked on features on separate feature branches and merged them back to the develop branch when done. The developer would test his own work on that feature branch. Now, with testers, we started asking this question:

On which branch should the tester test new features?

Obviously, there are two options:

  • on the individual feature branch
  • on the develop branch

Testing On Develop Branch

Initially, we believed this was the sure way to go because:

  • The feature is tested together with all the other features merged to the develop branch since its development started.
  • Any conflicts can be detected sooner rather than later.
  • It makes the tester's job easy: he is only dealing with one branch (develop) at all times. He doesn't need to ask the developer which branch is for which feature (feature branches are personal branches managed exclusively and freely by the relevant developers).

The biggest problem with this is:

  • The develop branch is polluted with bugs.

    When the tester finds bugs or conflicts, he reports them back to the developer, who fixes the issue on the develop branch (the feature branch is abandoned once merged), and there could be more fixes required afterwards. Multiple subsequent commits or merges (if a branch is recreated off the develop branch again to fix the bugs) make rolling the feature back out of the develop branch very difficult, if possible at all. Multiple features are merged to and fixed on the develop branch at different times. This creates a big issue when we want to create a release with only some of the features currently in the develop branch.

Testing On Feature Branch

So we thought again and decided we should test features on the feature branches. Before we test, we merge the changes from the develop branch into the feature branch (catching up with the develop branch). This is good:

  • You still test the feature together with the other features already in the mainstream;
  • Further development (e.g. bug fixes, resolving conflicts) will not pollute the develop branch;
  • You can easily decide not to release the feature until it is fully tested and approved.

However, there are some drawbacks:

  • The tester has to do the merging of the code, and if there is any conflict (very likely), he has to ask the developer for help. Our testers specialize in testing and are not capable of coding.
  • A feature could be tested without the existence of another new feature. E.g. Features A and B are both under test at the same time; the two features are unaware of each other because neither of them has been merged to the develop branch. This means you will have to test against the develop branch again once both features are merged there anyway. And you have to remember to test this in the future.
  • If Features A and B are both tested and approved, but a conflict is identified when they are merged, the developers of both features believe it is not their fault/job because their own feature branch passed the test. There is extra overhead in communication, and sometimes whoever resolves the conflict gets frustrated.

Above is our story. With limited resources, I would like to avoid testing everything everywhere. We are still looking for a better way to cope with this. I would love to hear how other teams handle this kind of situation.

  • This question seems like it's a better fit for Programmers, since it doesn't deal with a programming problem, but rather a development process. Can someone migrate it?
    – user456814
    Commented Aug 22, 2013 at 5:05
  • Our model is exactly the same. I'm interested in hearing how your QA team reports issues on feature branches differently from issues in the field or issues during the UAT process (if you have one). We use Atlassian JIRA and we have a different workflow for the two. Commented Sep 18, 2015 at 18:33
  • Deciding the same thing right now. Plus, as our environment is a Java Spring application, it takes around 20 minutes to build and deploy to the test environment. Happy someone asked the same doubts I had.
    – digao_mb
    Commented Feb 25, 2016 at 12:31
  • The first drawback isn't inherent to the process of testing on feature branches. Tools like GitHub Enterprise and Bitbucket have the ability to require approval for pull requests, and the person responsible for QA can approve, signaling to the developer that they are free to merge into develop. Commented Dec 15, 2016 at 16:38

6 Answers


The way we do it is the following:

We test on the feature branches after we merge the latest develop branch code into them. The main reason is that we do not want to "pollute" the develop branch code before a feature is accepted. If a feature were not accepted after testing, but we wanted to release other features already merged on develop, that would be hell. Develop is the branch from which a release is made, and thus it should be in a releasable state. The long version is that we test in many phases. More analytically:

  1. Developer creates a feature branch for every new feature.
  2. The feature branch is (automatically) deployed on our TEST environment with every commit for the developer to test.
  3. When the developer is done with the implementation and the feature is ready to be tested, they rebase the develop branch on the feature branch and deploy the feature branch, which now contains all the latest develop changes, on TEST.
  4. The tester tests on TEST. When they are done, they "accept" the story and merge the feature branch into develop. Since the developer had previously rebased the develop branch on the feature one, we don't expect too many conflicts. However, if that's the case, the developer can help. This is a tricky step; I think the best way to ease it is to keep features as small/specific as possible. Different features have to be merged eventually, one way or another. Of course, the size of the team plays a role in this step's complexity.
  5. The develop branch is also (automatically) deployed on TEST. We have a policy that even though the feature branch builds may fail, the develop branch build should never fail.
  6. Once we have reached a feature freeze, we create a release from develop. This is automatically deployed on STAGING. Extensive end-to-end tests take place there before the production deployment. (OK, maybe I exaggerate a bit; they are not very extensive, but I think they should be.) Ideally beta testers/colleagues, i.e. real users, should test there.
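Steps 3 and 4 above can be sketched as plain git commands. This is a toy repository built from scratch; the branch and file names (`feature/search` etc.) are hypothetical.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
git checkout -qb develop
echo v1 > app.txt
git add app.txt
git commit -qm "develop baseline"

git checkout -qb feature/search          # step 1: one branch per feature
echo search > search.txt
git add search.txt
git commit -qm "implement search"

git checkout -q develop                  # develop gains other work meanwhile
echo util > util.txt
git add util.txt
git commit -qm "other accepted feature"

git checkout -q feature/search           # step 3: pick up the latest develop
git rebase -q develop

git checkout -q develop                  # step 4: tester accepts, merge back
git merge -q --no-ff --no-edit feature/search
```

The `--no-ff` keeps an explicit merge commit on develop, so an accepted feature remains visible as a unit in the history.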

What do you think of this approach?

  • How do we make sure that feature 1 and feature 2, which were tested independently, are also good to go together (as mentioned in the question)? Commented Jul 15, 2014 at 13:22
  • We do, indirectly, by merging one and then the other into develop. It is step 4 of the process above, and it has to do with chronological order. So if feature 2 is ready to be merged but feature 1 was already merged, the feature 2 developer and tester have to make sure that their merge will work.
    – Aspasia
    Commented Jul 15, 2014 at 13:31
  • I think that according to this git branching model you are not supposed to merge two feature branches with each other anyway.
    – Aspasia
    Commented Jul 15, 2014 at 14:25
  • Do you have a complete TEST environment (DB, server, client, etc.) for each feature branch? Or do they share the environment and just have different names (e.g. app-name_feature1, app-name_feature2, etc.)?
    – hinneLinks
    Commented Aug 10, 2015 at 7:46
  • So this model still doesn't allow for concurrent testing of features to occur without having to re-test after every merge to develop?
    – Oletha
    Commented Sep 9, 2016 at 15:18

Before test, we merge the changes from the develop branch to the feature branch

No. Don't, especially if "we" includes the QA tester. Merging would involve resolving potential conflicts, which is best done by developers (they know their code), not by the QA tester (who should proceed to testing as quickly as possible).

Make the developer do a rebase of his/her feature branch on top of devel, and push that feature branch (which has been validated by the developer as compiling and working on top of the most recent devel branch state).
That allows for the following workflow:

Each time the tester detects a bug, he/she reports it to the developer and deletes the current feature branch.
The developer can then:

  • fix the bug
  • rebase on top of a recently fetched develop branch (again, to be sure that his/her code works in integration with other validated features)
  • push the feature branch.
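As a sketch, that fix/rebase/push loop might look like this, using a local bare repository as a stand-in "origin" (all names are hypothetical; `--force-with-lease` is used for the republish, since the rebased branch rewrites its published history):

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email dev@example.com
git config user.name "Dev"
git checkout -qb develop
echo base > app.txt
git add app.txt
git commit -qm "develop baseline"
git push -q -u origin develop

git checkout -qb feature/fix             # the feature branch under QA
echo fix > fix.txt
git add fix.txt
git commit -qm "fix the reported bug"
git push -q -u origin feature/fix

git checkout -q develop                  # develop moved on in the meantime
echo more > more.txt
git add more.txt
git commit -qm "another validated feature"
git push -q

git checkout -q feature/fix              # rebase on freshly fetched develop
git rebase -q develop
git push -q --force-with-lease origin feature/fix
```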

General idea: make sure the merge/integration part is done by the developer, leaving the testing to the QA.

  • Are you saying "don't use merge, use rebase instead"? If so, I'm confused, given the Git FAQ on the difference between the two: git.wiki.kernel.org/index.php/… Commented Sep 21, 2013 at 3:54
  • @VickiLaidler yes, if the feature branch is rejected by QA, the developer has to rebase, not merge (stackoverflow.com/a/804178/6309)
    – VonC
    Commented Sep 21, 2013 at 8:06
  • @VonC I completely agree, but there are some issues: 1) Deleting the branch impacts other tooling, like Stash pull requests (deleting the branch closes the PR); prefer force pushing. 2) If it's a big feature branch on which two people collaborated during its lifetime, merges would have been preferred over rebasing. Rebasing it at the end creates a conflict nightmare, as the merge commits will be removed, and if code depended on those merge changes, it is non-trivial to fix. Commented Sep 18, 2015 at 18:31
  • Looking back at my answer, I would also do a rebase and not a merge, for a cleaner history.
    – Aspasia
    Commented Oct 13, 2015 at 8:59
  • @Aspasia Good points. I have included pull requests in the answer for more visibility.
    – VonC
    Commented Jul 27, 2017 at 11:12

The best approach is continuous integration, where the general idea is to merge the feature branches into the develop branch as frequently as possible. This reduces the overhead and pain of merging.

Rely on automated tests as much as possible, and have builds, including unit tests, kick off automatically in Jenkins. Have the developers do all the work of merging their changes into the main branch, and provide unit tests for all their code.

The testers/QA can participate in code reviews, check off on unit tests, and write automated integration tests to be added to the regression suite as features are completed.

For more info check out this link.

  • You can still do CI with branches + rebasing in Git. Commented Sep 18, 2015 at 18:32

We use what we call "gold", "silver", and "bronze". This could be called prod, staging, and qa.

I've come to call this the melting pot model. It works well for us because we have a huge need for QA in the business side of things since requirements can be hard to understand vs the technicals.

When a bug or feature is ready for testing, it goes into "bronze". This triggers a Jenkins build that pushes the code to a pre-built environment. Our testers (not super techies, by the way) just hit a link and don't care about the source control. This build also runs tests, etc. We've gone back and forth on whether this build should actually push the code to the testing/QA environment if the tests (unit, integration, Selenium) fail. If you test on a separate system (we call it "lead"), you can prevent the changes from being pushed to your QA environment.

The initial fear was that we'd have lots of conflicts between these features. It does happen where feature X makes it seem like feature Y is breaking, but it is infrequent enough and actually helps. It helps get a wide swath of testing outside what seems to be the context of the change. Many times, by luck, you will find out how your change affects parallel development.

Once a feature passes QA, we move it into "silver" or staging. A build is run and tests are run again. Weekly, we push these changes to our "gold" or prod tree and then deploy them to our production system.

Developers start their changes from the gold tree. Technically, you could start from staging, since those changes will go up soon.

Emergency fixes are plopped directly into the gold tree. If a change is simple and hard to QA, it can go directly into silver, from which it will find its way to the testing tree.

After our release we push the changes in gold(prod) to bronze(testing) just to keep everything in sync.

You may want to rebase before pushing into your staging tree. We have found that purging the testing tree from time to time keeps it clean. There are times when features get abandoned in the testing tree, especially if a developer leaves.

For large multi-developer features we create a separate shared repo, but merge it into the testing tree the same way when we are all ready. Things do tend to bounce back from QA, so it is important to keep your changesets isolated so you can add on and then merge/squash into your staging tree.

"Baking" is also a nice side effect. If you have some fundamental change you want to let sit for a while there is a nice place for it.

Also keep in mind we don't maintain past releases; the current version is always the only version. Even so, you could probably have a master baking tree where your testers or community can bang on things and see how the various contributors' work interacts.


In our company we can't use agile development and need approval from the business for every change, which causes a lot of issues.

Our approach for working with Git is this:

We have implemented "Git Flow" in our company. We use JIRA, and only approved JIRA tickets should go to production. For test approval, we extended it with a separate Test/QA branch.

The steps for processing a JIRA ticket are:

  1. Create a new branch from the develop branch
  2. Do the code changes on the feature branch
  3. Pull the changes from the feature branch into the Test/QA branch
  4. After business approval, pull the changes from the feature branch into develop
  5. Develop goes frequently into a release and then finally into the master branch
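The steps above can be sketched in a toy repository as follows; the ticket number and branch names (`feature/JIRA-123`, `test`) are hypothetical illustrations of the described layout.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
git checkout -qb develop
echo base > app.txt
git add app.txt
git commit -qm "develop baseline"
git branch test                             # the long-lived Test/QA branch

git checkout -qb feature/JIRA-123 develop   # step 1: one branch per ticket
echo change > change.txt                    # step 2: code changes
git add change.txt
git commit -qm "JIRA-123: implement change"

git checkout -q test                        # step 3: feature goes to Test/QA
git merge -q --no-edit feature/JIRA-123

git checkout -q develop                     # step 4: after business approval
git merge -q --no-edit feature/JIRA-123
```

Note that the feature branch is merged into `test` only for verification; `develop` receives the change directly from the feature branch, never from `test`, so unapproved work parked on `test` cannot leak toward production.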

Splitting each request into its own feature branch ensures that only approved changes go to production.

The complete process was shown in a diagram (image not reproduced here).

  • So this process only allows for a single feature to be tested at once? That seems like a huge bottleneck for a big dev team.
    – Kane
    Commented Oct 21, 2020 at 12:09
  • You can test multiple features in one test system by merging them both into the "test" branch. And if you have multiple test systems, you can create multiple test branches. Commented Oct 22, 2020 at 7:58
  • That's the issue ... having multiple "test systems". Nobody ever talks about that bottleneck. Also, testing multiple features in a single test branch is risky ... as you don't know what impact feature A will have on feature B.
    – Kane
    Commented Oct 23, 2020 at 14:20
  • You are correct; deciding to have multiple test branches is risky and should be considered carefully. The use case is when you have long-term changes to be tested separately from short-term bug fixes. Updating the test systems frequently from develop, e.g. after a release, is important, or you may get in trouble later when the larger features go live (like a new-year release or something). And for long-term changes, since they usually have larger impacts, also run a regression test on the release branch before going live. Commented Oct 30, 2020 at 15:22
  • @ChristianMüller your approach is very interesting. I was just curious to know how you handle database scripts, considering the differences between the feature branch and the develop branch?
    – Felicity
    Commented Feb 16, 2021 at 15:48

I would not rely on manual testing alone. I would automate the testing of each feature branch with Jenkins. I set up a VMware lab to run Jenkins tests on Linux and Windows for all browsers. It's truly an awesome cross-browser, cross-platform testing solution. I test functional/integration with Selenium WebDriver. My Selenium tests run under RSpec, and I wrote them specially to be loaded by JRuby on Windows. I run traditional unit tests under RSpec and JavaScript tests under Jasmine. I set up headless testing with PhantomJS.
