128

We are a small software company with one product.

We use scrum, and our developers choose the features they want to include in each sprint. Unfortunately, over the past 18 months, the team hasn't once delivered the features they committed to for a sprint.

I've read a lot of posts/answers stating something along the lines of "software is done when it's done, no sooner, no later... it does not help to put pressure on the team, or to put more people on it, ..." I've received similar feedback from one of the developers when I asked how we can improve the success rate of our sprints. Oh, and yes, we do use retrospectives.

My question is basically:

When is it fair to look for the problem in the quality of the developers?

I'm starting to think that if you get to choose your own work/features and still fail each sprint, then either:

  • you are not able to oversee the complexity of your own code; or
  • the code is so complex that no one can oversee the complexity.

Am I missing something?

20
  • 51
    Why is your team not meeting the Sprint Goals? Are you completing any User Stories (or however you capture the features you are implementing) to the satisfaction of the Definition of Done? Are you adjusting your Velocity for the upcoming Sprint based on the previous Sprint's Velocity? Is your Product Owner prioritizing the Product Backlog? Is the Product Owner available to answer questions? What is happening at the Sprint Retrospectives?
    – Thomas Owens
    Commented Mar 23, 2016 at 15:55
  • 20
    Other possible reasons include: the features are poorly defined or the definition is changing during the sprint. The developers feel pressure to take on more than they can handle (simply saying they can choose doesn't eliminate this possibility.) You need to look at why they aren't finishing. Does being 'done' for that feature require other teams?
    – JimmyJames
    Commented Mar 23, 2016 at 18:51
  • 77
    So let me get this straight. You're constantly, consistently setting goals that are beyond the team's realistic ability to meet. You've known that this is happening for 18 months, but you keep setting unachievable goals, and now you think it's the team's fault for not meeting them? Einstein's famous definition of insanity springs immediately to mind. Commented Mar 23, 2016 at 20:18
  • 34
If "the developers do not choose what goes into a sprint", you are not doing scrum.
    – user53141
    Commented Mar 23, 2016 at 21:55
  • 10
    The terminology has changed. Agile teams no longer commit to a sprint, they forecast it. And just like a weather forecast, what you expect next week and what actually happens can change. scrum.org/About/All-Articles/articleType/ArticleView/articleId/…
    – Andy
    Commented Mar 24, 2016 at 0:37

16 Answers

153

You should first ask, 'who cares'?

Completing sprints feels good, and in some companies results in cookies from the scrum parent. But the ultimate test is whether the company is meeting its goals.

The above is facetious. If the company is succeeding while never completing the planned content of a sprint, you might as well use Kanban instead: you sort the backlog, you work on what's most important, and you don't worry so much about defined iterations.

One of the values of defined iterations is to drive process improvement (or drive out underperformers in some mentalities). You're not getting that now. So, you can either adopt the rest of the process that improves the process (and eventually completes sprints), or you can decide that you like what you have.

9
  • 54
I agree, and I personally find the idea of 'committing' in scrum to be inefficient. You are forced to structure your work around an arbitrary timeline in order to make this work. Effectively you end up with a bin packing problem. The only realistic way for everyone to finish what they commit to every Sprint is to commit to less than what they can accomplish in an average Sprint. I like to use the Sprint schedule for reassessing direction and housekeeping. Nothing more.
    – JimmyJames
    Commented Mar 23, 2016 at 18:56
  • 29
    Which is why scrum.org changed their terminology from "commitment" to "forecast" in 2011. Commented Mar 23, 2016 at 23:34
  • 6
    I like this answer, but I'd add that sprints with a time-based forecast can be a useful way to balance the velocity-based development process with external time-based business needs. If you can maintain a reputation for reasonably reliable time-based forecasts for sprints, you can use that to communicate your plans to business owners and justify the timing and prioritization of tasks based on business priorities. Of course, if your forecast has never been right in 18 months, your reputation is worse than the weatherman. Whether it's worth improving your forecasts or giving up is up to you. Commented Mar 24, 2016 at 1:58
  • 5
    I've worked for a company which was succeeding while never completing the planned content of a sprint, and we DID switch to Kanban instead. That made everything a lot smoother. Commented Mar 25, 2016 at 2:48
  • 1
    @SteveJessop, wow, they sure haven't publicized that very well. None of the "scrum masters" I've worked around for the past five years have ever mentioned it. Maybe they intentionally haven't mentioned it.
    – Kyralessa
    Commented Apr 14, 2016 at 18:23
129

Am I missing something?

YES!

You went 18 months - somewhere in the neighborhood of 36 sprints, each with a retrospective - but somehow couldn't fix it? Management didn't hold the team accountable, and then their management didn't hold them accountable for not holding the team accountable?

You are missing that your company is pervasively incompetent.

So, how do you fix it? The dev team stops committing to so much work. If the stories are so big that they cannot do that, then they need to break the stories down into smaller chunks. Then you get to hold people accountable for getting done what they say they will get done. If it turns out they can only get a tiny feature done each sprint, then figure out why and make it better (which may involve replacing a developer). If it turns out they can't figure out how to commit to a reasonable amount of work, you fire them.

But before that, I would look at management that let things go for that long, and figure out why they're not doing their job.

15
  • 30
    A "small software company with 1 product" probably doesn't have multiple levels of management, and quite possibly the existing managers don't have formal education in management. Commented Mar 23, 2016 at 17:01
  • 46
    I do not find that hard to believe at all. Most likely the failure to meet sprint goals doesn't cause acute problems because features are still being delivered fast enough for the business side to work reasonably well, maybe because the product doesn't have much competition in its niche and sales don't depend on promising new features and delivering them on time. Commented Mar 23, 2016 at 17:29
  • 9
@Orca: In 18 months, you should've been able to cut down the size, scope, and number of stories to the point where you achieved some success. I would think 3 sprints is a reasonable amount of time to figure out the smallest pieces of work you can accomplish in a sprint. Once you achieve that, use what you've learned and build up slowly. Build up the competencies of the team you have, and remember: this is a team sport; not just the developers, but the scrum master, the folks responsible for the product and feature descriptions, QA, etc. all need to work on the solution. Commented Mar 23, 2016 at 19:15
  • 31
    Having worked in a one-product-shop before, there is more pressure to "fill the bucket" than there is in a bigger place with different and shifting priorities. It's possible the devs are afraid to say no even though the things that should go in plus the 'flavor of the month' things from management are more than they can deliver on. It takes a lot of guts to tell the CEO no, no matter the size of the company.
    – corsiKa
    Commented Mar 23, 2016 at 19:23
  • 14
    The thing is, "success" in creating a product is never measured in terms of how much spare time you had at the end of a fortnight. If at the end of each sprint, you delivered working software, then the excess stories you planned into the sprint are irrelevant. They'll be picked up next sprint, so what! You're defining your team's success solely by how well they are fitting to the bureaucracy of the methodology. That is not Agile. @bmarguiles has it right - scrum is a guide, not holy scripture.
    – gbjbaanb
    Commented Mar 24, 2016 at 11:52
68

I'd like to suggest making a small change: try Kanban for a couple of weeks instead of Scrum. It may work better for your team.

While Scrum drives productivity by limiting the work time available in a sprint, Kanban drives productivity and velocity by limiting the number of active, concurrent issues. Time estimation is no longer part of the process. (source)

In a nutshell, what is Kanban?

Kanban is also a tool used to organize work for the sake of efficiency. Like Scrum, Kanban encourages work to be broken down into manageable chunks and uses a Kanban Board (very similar to the Scrum Board) to visualize that work as it progresses through the work flow. Where Scrum limits the amount of time allowed to accomplish a particular amount of work (by means of sprints), Kanban limits the amount of work allowed in any one condition (only so many tasks can be ongoing, only so many can be on the to-do list.)

How are SCRUM and Kanban the same?

Both Scrum and Kanban allow for large and complex tasks to be broken down and completed efficiently. Both place a high value on continual improvement, optimization of the work and the process. And both share the very similar focus on a highly visible work flow that keeps all team members in the loop on WIP and what’s to come.

See the rest of the details at this link.

2
  • 3
Would downvote (damn, too little rep). In my opinion Kanban requires more discipline compared to scrum, since there's no time box. Since the team "suffers" for months without any improvement, it seems either to be unable to break down stories into smaller chunks (know what they can do within a definite time period) or is even incompetent. Kanban will probably make things even worse, since there's no finish line. And regarding the quote "Kanban drives productivity and velocity by limiting the number of active, concurrent issues" - Scrum has this constraint too: complete one story after another. Commented Mar 25, 2016 at 10:32
  • 2
    yes, the key here is to try kanban for a few months.
    – Fattie
    Commented Mar 28, 2016 at 16:17
61

My question is basically: when is it fair to look for the problem in the quality of the developers?

There isn't enough information in your post to answer that question. There's no way to know if they are failing because they are incompetent, or failing because they commit to doing more work than is reasonable.

If I'm an incredibly gifted developer, on a team of incredibly gifted developers, and we fail to finish X stories in two sprints (or 36!), are we incompetent? Or, do we just suck at estimation? That depends on if the stories were "create a login screen" or "send a man safely to mars".

The problem starts with bad stories and/or bad estimates

Estimation is hard. Really hard. Humans suck at it, which is why Scrum has us break work down into blocks that shouldn't take more than a day or two, and to assemble small groups of those blocks that we're certain we can complete in a short period of time. The bigger the blocks, and the longer the period of time, the less accurate our estimations are.

What are your stories like? Are they well written, with good acceptance criteria? Are they each small enough to do in just a few days? Without well-written stories (which is the fault of the whole development team, including the product owner), the team can't be expected to do good estimation.

The problem is compounded by bad retrospectives

What you're doing wrong, seemingly, is that you aren't taking advantage of retrospectives. You've gone 18 months without solving this problem, so either the team isn't noticing the problem, or is failing to address it in their retrospectives.

Does each retrospective end with at least one action item for the team to take in order to do better on the next sprint? Does each retrospective include talking about the action items from the previous sprint, to see if they were done and whether they were effective?

The solution isn't to place blame, it is to learn

The first step should be to stop looking for blame, and instead, start working to improve the team. Your team is probably not incompetent, just bad at estimation and planning. Force the team to finish a sprint, even if that means they pick a single story and finish a week early. If they can't do that, then either they are incompetent, or the stories are simply too complex. The answer to that question should be obvious.

Once they are able to finish the one story, they will know with reasonable certainty that they can do X amount of story points in a sprint. Simple math will help answer the question of whether they can do more stories or not.

Continuous improvement is the solution

Once they finish one story in a sprint, it is time to see if they can do two. Lather, rinse, repeat. When they start failing the sprint goals, you've found the limit to their estimation abilities. Go back to the number of story points from the previous story and stick to that for a while.

At all times, take the retrospectives seriously. If they didn't finish a sprint, figure out why and act on it. Did they have too many unknowns? Do they have the wrong mix of skills? How good were their estimates? If they estimated a story at X points, did it require roughly the same amount of work as prior stories worth X points? If not, use that to adjust the points of stories going forward.
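As an illustration of that last point (every story name and number here is hypothetical), comparing the actual effort per point across completed stories makes drifting estimates easy to spot:

```python
# Hypothetical data: (story, estimated points, actual days of work).
completed = [
    ("login screen", 3, 2.5),
    ("password reset", 3, 3.0),
    ("report export", 5, 11.0),
]

# Average days of work that one story point has historically cost.
days_per_point = [actual / points for _, points, actual in completed]
baseline = sum(days_per_point) / len(days_per_point)

# Flag stories whose actual cost per point was well above the norm;
# their point values are candidates to revisit in the retrospective.
for story, points, actual in completed:
    ratio = (actual / points) / baseline
    if ratio > 1.5:
        print(f"{story}: {points} points looks underestimated ({ratio:.1f}x the norm)")
```

This is only a sketch of the idea, not a prescribed Scrum practice: the 1.5x threshold is arbitrary, and the useful part is the conversation the flagged stories trigger.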

1
  • 4
    +1 the goal should not be to assign blame but to learn/improve.
    – David
    Commented Mar 27, 2016 at 16:28
17

You say you "use retrospectives." But what does the team actually do in these retrospectives? Since you've gone 18 months without once addressing this aspect of your process, I'm guessing the answer is: nothing very useful.

To me, the retrospective is the most important part of the process. Throw out or change anything else about scrum all you want (by mutual agreement of the team during a retrospective, of course), but commit to regularly taking the time to talk about how the process is working for everyone, share what worked and what didn't work, and propose ideas to improve. Keep trying to improve your process little-by-little every sprint, and sooner or later, you can have something that works pretty well.

In your case, that process doesn't seem to be working. Since sprint goals aren't being met, it's prudent that a retrospective focus on why this is the case. Obviously, the team took on too much work for the sprint. But why did they do that?

  • Did they underestimate the complexity of the work?
  • Did management pressure them to take on more work than the team thought it could handle?
  • Did they have too many interruptions/emergencies that took resources away from completing the planned work?
  • Did they experience bottlenecks that delayed completion of the work (say, waiting for assets from an external design team)?
  • And even: were one or more team members incapable of doing the work at all?

These are the kind of questions the team should have been asking themselves every sprint for the past 18 months. Then, armed with the answers, they can propose suggested process improvements to trial for the next sprint. These might include:

  • Take on less work in the next sprint (duh!)
  • Be more conservative in estimates
  • Tell whoever is pressuring us to do more work to sod off, as we're already taking on more than we can accomplish right now
  • Manage interruptions better and adjust the amount of work in the next sprint to accommodate unavoidable emergencies
  • Fix the bottlenecks or plan around the ones you can't avoid
  • Don't assign stories to team members who cannot accomplish them (and separately, figure out the management response to address a situation with a poor-performing team member, from training and mentorship to dismissal)

This is the kind of conversation that should have happened every single sprint for the past 18 months. It's not about putting pressure on the team or adding more resources, but about using your retrospectives to improve your process on a continuous basis. That clearly isn't happening here.

You would think that by the 15th sprint with missed goals, the team would have discussed this in their retrospective so many times, to the point where they decided to just take on the most minimal sprint goals possible just for the sake of getting one complete. By the 25th uncompleted sprint, I'd just do a sprint with a single string change and nothing else. If the team can't manage that in a sprint, the problems are even worse than you let on.

To be clear, as several here have pointed out already, sprint goals are forecasts, not iron commitments, and missing goals is not itself indicative of anything other than making inaccurate forecasts. A great team can miss tons of goals because they're bad forecasters, while an awful team can meet every one and not deliver any actual value. But if your forecasts are wrong in the same direction for 18 months in a row, that part of your process is not working. Use your retrospectives to fix the process so your forecasts are reasonably close to the actual reality of what the team can deliver each sprint.

2
  • Expect that, for the single string change, the devs will have to set up a new module test environment, figure out how it is to be configured (if not touched for a year or two), fight their way through legacy spaghetti code, see other parts not compiling/working with it anymore, then, when it has finally been changed and tested on the desktop, the automated build fails for some reason, taking half a day or a day to figure out why.
    – Erik Hart
    Commented Mar 24, 2016 at 23:24
  • 2
    @ErikHart That sounds like a whole bunch of separate things that are already broken up, and should be when doing time estimates and planning. Commented Mar 28, 2016 at 20:13
5

"software is done when it's done, no sooner, no later."

This is true, but for each task that your developers begin working on, does everybody in your organisation understand the Definition of Done?

It seems that one of your biggest issues is Estimation, but developers can only provide a realistic estimate when they have an unambiguous and clearly-specified 'definition of done'. (Which includes company process issues - e.g. user documentation, work packages on a formal release, etc.)

It's not surprising that poor estimation is causing a problem, given that most developers find estimating the time required to complete a task to be the most difficult thing they're asked to do.

However, most developers tend to have a reasonable (albeit optimistic) handle on the amount of effort they are able to put in for a given period of time.

The problem often is that developers struggle to create a relationship between a task and the total amount of effort required when they're dealing with incomplete information - particularly if they are pressured to come up with all the answers up-front to a really huge task.

This naturally leads to time estimates becoming disconnected from reality, and they lose sight of things like the build process and user documentation.

The disconnect tends to start at the very beginning when the task is described; and it's usually compounded by a non-technical person drawing up a list of requirements without having any clue of the amount of effort needed.

Sometimes people in senior management specify tasks and completely ignore company process issues; it's not uncommon for senior management to think that things like defining tests, creating a formal release build, or updating a user document just happen magically, with no time or effort required.

Sometimes projects fail before a developer has even written a line of code because somebody, somewhere is not doing their job correctly.

If the development team are not involved in agreeing requirements or capturing acceptance criteria, then that's a failure of management - because it means that someone who has an insufficient understanding of the code and the technical issues has committed the business to an incomplete set of requirements, and left the project open to misinterpretation, scope creep, gold plating, etc.

If the development team are involved in capturing and agreeing requirements, then it could be a failing of the team, who are responsible for clarifying the details (and the acceptance criteria - i.e. "What does the deliverable look like? when is it done?"). The development team are also responsible for saying NO when there are other blocking issues in the way, or if a requirement is just unrealistic.

So if the developers are involved in the capture of requirements:

  • Does the team have an opportunity to sit down with the product manager to clarify the requirements/definition of done?
  • Does the team ask sufficient questions to clarify implied/assumed requirements? How often are those questions answered satisfactorily?
  • Does the team demand Acceptance criteria (definition of done) before providing an estimate?
  • How well are Acceptance criteria usually captured for each task? Is it a vague document with sparse detail or does it describe tangible functionality, and wording that a developer could unambiguously translate into a test?

The chances are that the productivity of your team is not an issue; your team is probably putting in all the effort they are able to put in with regards to development. Your real issues could be one or more of the following:

  • Incomplete and vague requirements.
  • Requirements/tasks which are just too big in the first place.
  • Poor communication between the development team and upper management.
  • A lack of clearly-defined acceptance criteria before the tasks are handed to the team.
  • Incomplete or vague/ambiguous specification of acceptance tests. (i.e. Definition of Done)
  • Insufficient time allocated to defining/agreeing acceptance criteria.
  • Developers didn't consider time to test existing baseline code or fix existing bugs
  • Developers did test the existing baseline code but did not raise the bugs as Blocking Issues before providing estimates on the requirements
  • Management saw the issues/bugs and decided that bugs do not need to be fixed before writing new code.
  • Developers are under pressure to account for 100% of their time, even though possibly 20% (or some similar number) of their time is probably taken up by meetings, distractions, emails, etc.
  • Estimates are agreed at face value and nobody adds room for error or contingency (e.g. "We decided this should take 5 days, so we'll expect it done in 8.").
  • Estimates are treated by everybody (developers and management) as single numbers instead of as a realistic range - i.e.
    • Best case estimate
    • Realistic estimate
    • Worst-case estimate

... the list could go on a lot longer than that.

You need to do some "fact finding" and figure out exactly why the estimates are consistently disconnected from reality. Is the existing baseline software bad? Does it lack unit test coverage? Do your developers avoid communication with management? Do management avoid communication with developers? Is there a disconnect between the management expectations and developer expectations when it comes to "Definition of Done"?

4

My advice to reboot the team is to pick the smallest story possible per team, per sprint, and complete that one story, and that one story only!

I agree with the other posters, either the team is incompetent, or they are trying to do too much stuff.

Start with the smallest stuff, the most pared down stories, and complete a single sprint. Get the team to finish a sprint and be successful, and it will help them to see how to prioritize their time and work commitments. Over time the team will be able to take on more and more work until they get to their peak productivity.

4

You should be collecting data and building confidence levels based on past performance.

http://www.leadingagile.com/2013/07/agile-health-metrics-for-predictability/

The simplest example is with fixed-length sprints, such as every two weeks. Estimate how many story points the team will finish within the two weeks. Then, after the two-week sprint is over, see how many story points were actually completed. Over time, you might see that you estimate 15 points but only finish 10. In that simple case, you can start planning with a velocity adjustment, so you only plan 10 points per sprint - or plan to finish 66% of estimated work.

By building confidence levels with standard deviations, you can tell management: according to current project goals, we have only 50% confidence that we can finish in 3 weeks, but 95% confidence that we can finish in 5 weeks.
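A minimal sketch of that bookkeeping, with made-up velocity history (the numbers, and the assumption that velocity is roughly normally distributed, are illustrative only):

```python
import statistics

# Hypothetical history: story points actually completed in each past sprint.
completed_per_sprint = [10, 8, 12, 9, 11, 10, 7, 13]

mean = statistics.mean(completed_per_sprint)
stdev = statistics.stdev(completed_per_sprint)  # sample standard deviation

# Plan around the mean; quote a more conservative figure for commitments.
# If velocity is roughly normal, mean - 2*stdev is met in ~97% of sprints.
print(f"plan ~{mean:.0f} points per sprint")
print(f"safe commitment: ~{mean - 2 * stdev:.0f} points per sprint")
```

The gap between the two figures is itself useful information: a team with a stable process has a small standard deviation, and the planned and safe numbers converge.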

3

The idea behind Agile and Scrum is to build in a tight feedback loop so that you can gauge your process. You have to ask "Where did that break down?", since it seems to have broken down completely.

  1. Plan what you are going to do and make a list of it
    • This should consist of picking items from a backlog of items that need to be completed. Before anything is pulled into the to-do list for the sprint, the team needs to agree that they fully understand it and that they roughly estimate it will take less than a sprint to complete.
    • Ideally, the backlog is ordered by priority (to the business) and you can pull in priority order.
    • If items from the backlog are too big, break them down into smaller chunks. Then break the chunks down into individual tasks which can be completed in a day or less.
    • Don't expect this planning to be easy or quick.
  2. Execute on items from the list for a defined period of time (a sprint)
  3. Review what you accomplished
    • What stories were finished?
    • If no stories were finished, then what tasks making up stories were finished?
    • If no tasks were finished, then what exactly did everyone do last Monday? Last Tuesday? And so on - at this point, it's time for severe introspection...
  4. Troubleshoot the problems (analyze feedback and adapt)

    • How long did the things that got finished take?
    • What prevented tasks from getting completed?
    • Are team members breaking down stories (features) into tasks that can be completed in 1 day or less? If not, do this and make it part of the to do list.
    • What changes to the task list or the items on the task list were made during the sprint? Was this a cause for not finishing? If it is, don't change the list, or the features. Throw the changed feature on the backlog until it's stable.
    • How can you reduce the size and scope of a few items to something that can be finished in a sprint? Pick something tiny like a logging improvement, a simple bug fix, a typo, just to get some things finished to let the team get a gauge of what they can do. If you can't do this, then stop sprinting and re-plan.
  5. Back to step one and repeat until release...

Are there documentation obstacles, coupling problems creating dependencies, communication problems, or not enough information in the requirements? Did the developers spend their time trying to learn new technologies? Did they spend huge amounts of time in design? Were things like learning accounted for in the sprint task list?

Does your team think they isolated their problems at each retrospective? Did the team act to correct the problems? Or did the team not respond, with management simply dictating the solutions and course of action?

Given the long time span, something is wrong systemically, not simply with the developers. At some point (before a year was up) someone on the team (including the scrum master) should've asked, what, however small, can we accomplish?

2

In your situation, retrospectives are too late.

Are you holding daily stand-up meetings, and truly getting status from people about what they did in the previous 24 hours? Is the scrum master using those meetings to measure each developer's progress against their goals?

You need to use that piece of the Scrum methodology to monitor progress as you go. It should give you good insight into what people are doing. Are they distracted - spending too much time on coffee, on helping other people on SE/SO, on reading the news, or on doing inspections that aren't accounted for? Or are they really head-down, full-steam ahead, and thoroughly over-committed? The daily view should give you a good idea. It will also help to keep devs focused on the task at hand, so they don't have to admit they did nothing yesterday.

And of course, if they report steady progress all through the sprint and still don't deliver at the end, then they were lying, and it might be time for a new developer.

3
  • this post is rather hard to read (wall of text). Would you mind editing it into a better shape?
    – gnat
    Commented Mar 24, 2016 at 19:43
  • 1
    @gnat It hardly seems necessary to protect the question just because I failed to format my answer nicely enough for you. That does not make it a low-quality answer and it certainly isn't spam. Down-voting for formatting issues from a newbie is pretty heavy-handed too. I raised a good point since no one else mentioned evaluating progress in mid-sprint. Try upvoting it for the content instead of being picky.
    – Sinc
    Commented Mar 24, 2016 at 20:59
  • 1
@Sinc: you don't have any way to know who down-voted your answer. You shouldn't assume it was the first person to make a comment. Many of us will make comments without making a vote, and vice versa. A good answer is more than just factual information -- it needs to be easy to read and clear in the message it is trying to convey. Few people are willing to read an answer that is a single dense paragraph, and if nobody is willing to read the answer, or if it is hard to understand, it's not a useful answer. When you write an answer, use it as an opportunity to hone your technical writing skills. Commented Mar 24, 2016 at 22:14
2

Estimating the effort and time required to complete a complex task, such as programming code, is difficult. As Joel Spolsky puts it:

Software developers don’t really like to make schedules. Usually, they try to get away without one. “It’ll be done when it’s done!” they say, expecting that such a brave, funny zinger will reduce their boss to a fit of giggles, and in the ensuing joviality, the schedule will be forgotten.

However, companies need deadlines in order to operate. As Joel suggested, try using Evidence Based Scheduling, which yields time estimates with associated probabilities - something management can treat like any other kind of risk.
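The core of Evidence Based Scheduling is a Monte Carlo simulation over the team's historical estimate-to-actual ratios. A minimal sketch of that idea (the history and remaining estimates are made up):

```python
import random

# Hypothetical history of finished tasks: (estimated hours, actual hours).
history = [(4, 6), (8, 8), (2, 5), (16, 20), (6, 9)]
ratios = [actual / est for est, actual in history]

def simulate(estimates, trials=10_000, seed=42):
    """Scale each remaining estimate by a randomly sampled historical
    ratio, sum the results, and collect the distribution of totals."""
    rng = random.Random(seed)
    return sorted(
        sum(est * rng.choice(ratios) for est in estimates)
        for _ in range(trials)
    )

totals = simulate([8, 4, 12])  # remaining work, in estimated hours
p50 = totals[len(totals) // 2]
p95 = totals[int(len(totals) * 0.95)]
print(f"50% chance of finishing within {p50:.0f} hours, 95% within {p95:.0f} hours")
```

The shape of the distribution matters more than any single number: a wide gap between the 50% and 95% figures tells management exactly how risky the schedule is.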

2

Scrum does a few things.

First, it encourages prioritization. The supplier of work has to say what they want to be done first, and not say "everything is equally important".

Second, it generates a somewhat usable product even if not everything is finished. That is the point of having a "potentially shippable product" at the end of each iteration.

Third, it gives a tighter feedback loop. By insisting that things be "done" at the end of a sprint, you avoid the "90% feature complete, but only half way done" problem; when pushing for deadlines, you can shove things that need to be done aside so it looks like you almost hit the deadline, or you can fake it. By having a definition of done, and insisting on things being done, you know if something is harder than it looks earlier instead of later.

Fourth, it avoids inventory by moving detailed planning close to doing the work. Planning things far out is a form of inventory: capital spent on resources that isn't available for sale to, or immediate use by, customers. Such inventory can rot (plans change underfoot, new information makes them obsolete), misalign with needs (turns out we don't need a distributed network whatzit, because the thing using it wasn't worth it), and reduce the value of shipped goods (if in the last year half of your time was spent planning for next year and beyond, you could have shipped twice as much by instead working on stuff to be ready now). If you can move planning closer to execution without loss (tricky!), you can decrease inventory.

It isn't the only way to solve these problems. You seem to be using scrum where it provides a stream of work to developers to work on for each period of time, with periodically adding new work to do, and checking on progress.

This is a useful way to use scrum-esque patterns. It keeps work flowing, it keeps planning close to production, it provides some feedback loops, etc. It even has advantages in that it doesn't warp development and testing to match the system (if testing is best done when the work is basically finished, trying to get things finished and tested within the same sprint forces the back end of the sprint to not involve new development!)

The failure to deliver exactly what they put into a sprint is not evidence that your developers aren't doing great work. It means they aren't following SCRUM from on high, and are instead using parts of the framework.

If they had halved (or quartered) how much they committed to each sprint, but kept everything else the same, then they would have finished more than they had committed to each sprint! You would have the same amount of code produced. Clearly the "sprint failures" aren't the important part, because that is just an internal process detail. The goal of the company is to get shit done, and that shit be good; not to follow some specific process, unless your goal is a certain kind of ISO process certification.

The process exists to serve the goal of getting the work done, not the other way around.

On the other hand, because they are not following the rules of SCRUM, you aren't getting the same kind of feedback. You should examine the resulting work to see whether the flaws produced are the flaws SCRUM was designed to deal with: are there stories that live on like zombies forever, and only get killed way late? Are there stories that seem easy but explode, and in retrospect weren't worth the total work? Is the product actually shippable at the times you need/want to ship it?

1
  • Mostly the point I was going to make. There isn't enough information to know if "the team haven't once delivered the features they committed to for a sprint." is a problem. If most, or the most important, features are being done then there is nothing necessarily wrong with over committing. I prefer scrums that consistently over or under commit to those that are more random. A team that always meets exactly their commitment is probably worth closer investigation.
    – itj
    Commented Feb 17, 2017 at 13:47
1

Oh, and yes we do use retrospectives.

Oh good, so you know why your team is failing right? You've had 36 opportunities to talk about what did and did not work, so the scrum masters should fully understand how to solve the issues, right?

I have a hunch, from the description you give, that your company has fallen into the "SCRUM makes us productive" mentality. The truth is that SCRUM does not make you productive. Rather, it is a tool to help you make yourself productive, in a way that surfaces realities of development that are often overlooked by management and developers alike.

What has the scrum master identified as potential issues with the team? Are they constantly assigning twice as much work as they can handle? If so, the scrum master should be gently suggesting they take on less work, because the scrum master can look at the team's velocity.
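Looking at the team's velocity, as suggested above, can be made mechanical. Here is a minimal sketch (with hypothetical story-point numbers) of how a scrum master might cap the next sprint's commitment at the team's observed velocity rather than at what the team hopes to do:

```python
# Hypothetical recent history: story points committed vs. actually completed.
history = [
    {"committed": 40, "completed": 22},
    {"committed": 38, "completed": 25},
    {"committed": 42, "completed": 24},
]

# Velocity is based on what actually got done, not on what was promised.
velocity = sum(s["completed"] for s in history) / len(history)

def plan_sprint(backlog_points, velocity):
    """Greedily fill the next sprint, in priority order, up to velocity."""
    commitment, total = [], 0
    for points in backlog_points:
        if total + points > velocity:
            break
        commitment.append(points)
        total += points
    return commitment, total

# Prioritized backlog: points per story, highest priority first.
commitment, total = plan_sprint([8, 5, 5, 3, 8, 2], velocity)
print(f"Velocity {velocity:.1f} points; commit to {commitment} = {total} points")
```

A team committing ~40 points while completing ~24 will "fail" every sprint by construction; capping commitment at observed velocity makes the same output look like success, which is exactly the point other answers make about over-commitment being a planning artifact rather than a quality problem.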

When is it fair to look for the problem in the quality of the developers?

The time where one should be looking for the problem in the quality of the developers is the moment you are certain that is the problem. This is not a new issue created by SCRUM. This is the reality of business. SCRUM should give you vastly more information about the capabilities of your team members than traditional approaches do. You should know whether the issue is "software developers aren't good enough" versus "management expectations are unrealistic" to a far better degree than you would understand it with a traditional approach. This is the time for management to do what management does best: figure out the right people for the job, so the company can make money. If you can't tell where the problem is, then imagine how hard it would be to tell without all those retrospectives!

If you think the people might be good enough (implying their hiring was not a mistake on management's part), then my advice would be to start thinking outside the box. If the work isn't getting done, consider changing the shape of the work for the developers. One of the easiest ways I've found to make sprint completion deadlines is to adjust the DONE criteria so that you will be happy with the result, no matter how it gets done. Thus being completed becomes a tautology.

This puts the onus on management, especially the SCRUM master. The trick is to write tasks which, if completed, are very valuable, but which, if left incomplete, still add enough value to the company to have been worth their paycheck. After 18 months, I would expect your retrospectives to have taught you something. If they haven't, perhaps you should write the stories with the explicit intent that failed stories unearth something that is wrong in your company and bring it to light. That would provide the company with immensely valuable data, given how much frustration the company seems to have with the development team. The problem may indeed be the developers, as you ask. Or the problem may be a pathology in the mindset of the company that you had no idea about until now!

If indeed the issue is with the company, not the developers, the information you glean from these incomplete stories may indeed be worth more than the product you collect from the successful ones! It may be the information that saves your entire company! That seems like really valuable information to me, and you can use SCRUM as a tool to help you gather it.

0

"Software is done when its done, no sooner, no later" is a recipe for failure if you've not defined what "done" looks like.

Most engineers will try to produce the best possible solution, but this can easily lead to gold-plating, especially with less experienced engineers. The only responsibilities of management are to define exactly where the goal is and to keep your engineers heading in that direction. Engineers will frequently attempt to take side turnings to improve features, and it's up to management to decide whether that side turning will speed things up long-term, or whether it's just improving for the sake of improving.

The point of agile development is that each new piece of work should be as good as required to meet that sprint AND NO BETTER!!!!!! Yes, that's about the most emphasis I can add in StackOverflow - and it's still not enough emphasis. If you find that people are adding stuff that's not required right this second then they need training on how to do agile development correctly.

2
  • This also bears the risk of piecemeal work, kludges, and quick-and-dirty solutions. Often, management is not familiar with software issues and will decide to schedule only what some customer actually asks for. Core issues will not get scheduled, but one dirty workaround after another for them will. Like: "we don't have time to get the integration tests for that module running, we have a dozen bug reports in the pipe for it!" It forbids some dev best practices, like the campsite rule, and instead leaves the garbage until you can't walk over it any more.
    – Erik Hart
    Commented Mar 25, 2016 at 14:42
  • @ErikHart That's entirely true, and that's the core philosophy of agile development which you need to grok. You're not working for your own satisfaction, you're working for the customer's satisfaction. Testing is not an optional extra though, so if the workarounds are making testing take longer then your estimates need to clearly show that. At some point the extra testing to check your workarounds all work will outweigh the effort to just fix it.
    – Graham
    Commented Mar 31, 2016 at 10:36
0

"When is it fair to look at the quality of the developers?"

All the time. Obviously some people are more productive than others; as their employer, you don't need an excuse to measure their performance.

The tricky bit is how you do it. My advice is to hire some experienced contractors to work alongside your perm staff on the same set of tasks, estimated by your perm guys, and see if they have a higher velocity.

This will give you a good comparison with the current market without locking you into a long-term hire.

It might also give the perm guys a bit of a kick up the arse.

Additionally, if they complain that the contractors are skimping on quality to gain velocity, then that will drive a conversation about where the business value is: long-term maintainability or short-term products shipped.

If it is the long-term stuff, then it will force you to quantify it and put it down in the sprint as requirements!

3
  • 2
    "..work along side your perm staff on the same set of tasks estimated by your perm guys and see if they have a higher velocity.." - right, and both the employee and the contractor should be implementing the very same feature (without seeing each other's work) right? That, for the measurement to be fair. Doesn't sound very feasible to me. Commented Mar 25, 2016 at 16:17
  • ? Not implement the features twice. That would be crazy. I mean work in the team. But let the original guys do the estimates
    – Ewan
    Commented Mar 25, 2016 at 20:11
  • obvs if the new guys estimated the features they worked on, you wouldn't know whether they were just easy tasks
    – Ewan
    Commented Mar 25, 2016 at 20:13
0

There are already several excellent answers. In particular, bad estimation, over-commitment, and/or unscheduled work are frequent causes of slippage.

But I'm curious as to why "[your] developers choose the features they want to include in each sprint". The developers should typically be working on the features with the highest priority first -- and priority is a business decision, i.e. that should be coming from the product owner acting as a proxy for the business stakeholders.
(There are exceptions to this. In particular, high-risk features are generally worked earlier. And in some cases a user-facing feature may depend on other functionality, e.g. "we really need to add a database before we can implement X".)

On the other hand, estimates are technical decisions, and should not be made (or second-guessed) by business people. You don't say anything about this -- I raise the point only because, in my experience, when developers are choosing what to work on, it's fairly common for the business people to try to dictate how long it should take.

It sounds like you have a fairly dysfunctional process. I would recommend against bringing in developer consultants, at least for the time being, because that is probably going to have a negative effect on morale. But it sounds like your organization could use some help on the project management side. That's where I would start, by bringing in an experienced agile coach -- if not for a medium to long-term engagement, then at least for an assessment or "health check". A good coach will tell you if you have underperforming developers, but at least this way, it's the entire team (not just the devs) who are under scrutiny.


One other observation: in my experience it's very difficult to make scrum succeed as a project management methodology if you're not also following good development processes. Are you doing automated unit testing? or even better, automated acceptance testing? Are your devs pairing, or do you at least perform frequent code reviews and/or walkthroughs? Are you practicing some form of continuous integration?
