UNIT-I
Conventional Software Management
1. The best thing about software is its flexibility:
- It can be programmed to do almost anything.
2. The worst thing about software is its flexibility:
- The “almost anything” characteristic has made it difficult to plan, monitor, and control
software development.
3. In the mid-1990s, three important analyses were performed on the software engineering
industry.
All three analyses reached the same general conclusion:
“The success rate for software projects is very low.”
They can be summarized as follows:
1. Software development is still highly unpredictable. Only 10% of software projects are
delivered successfully within their initial budget and schedule.
2. Management discipline is more of a discriminator in success or failure than are technology
advances.
3. The level of software scrap and rework is indicative of an immature process.
Software management process framework:
WATERFALL MODEL
1. It is the baseline process that most conventional software projects have used.
2. We can examine this model in two ways:
i. IN THEORY
ii. IN PRACTICE
IN THEORY:-
In 1970, Winston Royce presented a paper titled “Managing the Development of Large Software
Systems” at IEEE WESCON, in which he made three primary points:
1. There are two essential steps common to the development of computer programs:
- Analysis
- Coding
2. In order to manage and control all of the intellectual freedom associated with software
development, one should introduce several additional steps:
- System requirements definition
- Program design and testing
These steps are in addition to the analysis and coding steps.
3. Since the testing phase comes at the end of the development cycle, the waterfall model is risky
and invites failure: problems found during testing force either a modification of the requirements
or a substantial design change, such as breaking the software into different pieces.
- There are five improvements to the basic waterfall model that would eliminate most of the
development risks:
a) Complete program design before analysis and coding begin (program design comes first):-
- By this technique, the program designer assures that the software will not fail because of
storage, timing, and data-flux problems.
- Begin the design process with program designers, not analysts or programmers.
- Write an overview document that is understandable, informative, and current so that every
worker on the project can gain an elemental understanding of the system.
b) Maintain current and complete documentation (Document the design):-
- It is necessary to provide a lot of documentation on most software programs.
- This documentation supports later modifications by a separate test team, a separate maintenance
team, and operations personnel who are not software literate.
c) Do the job twice, if possible (Do it twice):-
- If a computer program is developed for the first time, arrange matters so that the version finally
delivered to the customer for operational deployment is actually the second version insofar as
critical design/operations are concerned.
- This “do it N times” approach is the principle behind modern-day iterative development.
d) Plan, control, and monitor testing:-
- The biggest user of project resources is the test phase. This is the phase of greatest risk in terms
of cost and schedule.
- To carry out proper testing:
i) Employ a team of test specialists who were not responsible for the original design.
ii) Employ visual inspections to spot obvious errors such as dropped minus signs, missing factors
of two, and jumps to wrong addresses.
iii) Test every logic path.
iv) Perform the final checkout on the target computer.
e) Involve the customer:-
- It is important to involve the customer in a formal way, so that the customer is committed at
earlier points before final delivery, through reviews such as:
i) Preliminary software review during preliminary program design step.
ii) Critical software review during program design.
iii) Final software acceptance review following testing.
IN PRACTICE:-
- Despite the advice of software developers and the theory behind the waterfall model, many
software projects still practice the conventional software management approach.
Projects destined for trouble frequently exhibit the following symptoms:
i) Protracted (delayed) integration
- In the conventional model, the entire system was designed on paper, then implemented all at
once, then integrated. Only at the end of this process was it possible to perform system testing to
verify that the fundamental architecture was sound.
- Testing activities consume 40% or more of life-cycle resources. A typical expenditure breakdown
by activity:
ACTIVITY                COST
Management                5%
Requirements              5%
Design                   10%
Code and unit testing    30%
Integration and test     40%
Deployment                5%
Environment               5%
ii) Late Risk Resolution
- A serious issue associated with the waterfall life cycle is the lack of early risk
resolution.
iii) Requirements-Driven Functional Decomposition
- Traditionally, the software development process has been requirements-driven: an attempt is
made to provide a precise requirements definition and then to implement exactly those
requirements.
- This approach depends on specifying requirements completely and unambiguously before other
development activities begin.
iv) Adversarial Stakeholder Relationships
The following sequence of events was typical for most contractual software efforts:
- The contractor prepared a draft contract-deliverable document that captured an intermediate
artifact and delivered it to the customer for approval.
Project Stakeholders:
Stakeholders are the people involved in or affected by project activities. Stakeholders include:
- the project sponsor and project team
- support staff
- customers
- users
- suppliers
- opponents to the project
CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE
Barry Boehm’s Top 10 “Industrial Software Metrics”:
1) Finding and fixing a software problem after delivery costs 100 times more than finding and
fixing the problem in early design phases.
2) You can compress software development schedules up to 25% of nominal, but no more.
3) For every $1 you spend on development, you will spend $2 on maintenance.
4) Software development and maintenance costs are primarily a function of the number of source
lines of code.
5) Variations among people account for the biggest difference in software productivity.
6) The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in 1985,
85:15.
7) Only about 15% of software development effort is devoted to programming.
8) Software systems and products typically cost 3 times as much per SLOC as individual
software programs. Software-system products cost 9 times as much.
9) Walkthroughs catch 60% of the errors.
10) 80% of the contribution comes from 20% of the contributors.
- 80% of the engineering is consumed by 20% of the requirements.
- 80% of the software cost is consumed by 20% of the components.
- 80% of the errors are caused by 20% of the components.
- 80% of the software scrap and rework is caused by 20% of the errors.
- 80% of the resources are consumed by 20% of the components.
- 80% of the engineering is accomplished by 20% of the tools.
- 80% of the progress is made by 20% of the people.
Evolution of Software Economics
Economics is the system of interrelationships among money, industry, and employment.
SOFTWARE ECONOMICS:-
The cost of software can be estimated by treating the following five items as parameters of a
function:
1) Size: measured in terms of the number of source lines of code (SLOC) or the number of
function points required to develop the required functionality.
2) Process: the process used to produce the end product, in particular its ability to avoid
non-value-adding activities (rework, bureaucratic delays, communications overhead).
3) Personnel: the capabilities of the software engineering personnel, particularly their
experience with the computer science issues and the application domain issues of the project.
4) Environment: the tools and techniques available to support efficient software development
and to automate the process.
5) Quality: the required quality of the product, including its features, performance, reliability,
and flexibility.
The relationship among these parameters and the estimated cost can be expressed as:
Effort = (Personnel) × (Environment) × (Quality) × (Size^Process)
where Size is raised to an exponent determined by the Process parameter.
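To make the shape of this relationship concrete, here is a minimal sketch in Python. The function
name and every coefficient value are illustrative assumptions, not calibrated data; real cost
models publish calibrated parameter values.

```python
# Minimal sketch of the parametric cost-model form above. All values
# are illustrative assumptions; real models use calibrated coefficients.

def estimate_effort(size_kloc: float,
                    process_exponent: float,
                    personnel: float = 1.0,
                    environment: float = 1.0,
                    quality: float = 1.0) -> float:
    """Effort = (Personnel)(Environment)(Quality)(Size^Process).

    size_kloc        -- estimated size in thousands of SLOC
    process_exponent -- >1.0 models a diseconomy of scale
    personnel, environment, quality -- multipliers; 1.0 is nominal
    """
    return personnel * environment * quality * size_kloc ** process_exponent

# Example: a 100-KLOC project with a somewhat immature process (exponent 1.10)
# and nominal team, tooling, and quality multipliers.
print(estimate_effort(100, 1.10))  # ~158 (arbitrary effort units)
```

Because Size is the base of an exponent greater than 1, doubling the size more than doubles the
effort; that diseconomy of scale is exactly what a better process (a smaller exponent) mitigates.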
1) Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom
processes, and virtually all custom components built in primitive languages. Project performance
was highly predictable, in the sense that cost, schedule, and quality targets were almost always
overrun.
2) Transition: 1980s and 1990s, software engineering. Organizations used more repeatable
processes and off-the-shelf tools, and mostly (>70%) custom components built in higher-level
languages.
- Some of the components (<30%) were available as commercial products, such as operating
systems, DBMSs, networking software, and GUIs.
3) Modern practices: 2000 and later, software production. Organizations use managed and measured
processes, integrated automation environments, and mostly (70%) off-the-shelf components.
PRAGMATIC SOFTWARE ESTIMATION:
- One of the critical problems in software cost estimation is the scarcity of well-documented
case studies; without them it is difficult to estimate the cost of software reliably.
- Cost model vendors nevertheless claim that their tools are well suited to estimating iterative
development projects.
- In order to estimate the cost of a project, the following three questions should be considered:
1) Which cost estimation model should be used?
2) Should software size be measured in SLOC or function points?
3) What constitutes a good estimate?
- Many software cost estimation models are available, such as COCOMO, CHECKPOINT,
ESTIMACS, Knowledge Plan, Price-S, ProQMS, SEER, SLIM, SOFTCOST, and SPQR/20.
- Of these, COCOMO is one of the most open and well-documented cost estimation models.
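Because COCOMO is the most openly documented of these models, a minimal sketch of its basic form
(Effort = a × KLOC^b person-months) is given below. The (a, b) pairs are the widely published
basic-COCOMO coefficients; the helper name itself is our own.

```python
# Minimal sketch of Boehm's basic COCOMO model (not the intermediate or
# detailed variants): Effort = a * KLOC**b person-months.

COCOMO_BASIC = {
    "organic":       (2.4, 1.05),  # small teams, familiar problem domain
    "semi-detached": (3.0, 1.12),  # mixed experience, moderate constraints
    "embedded":      (3.6, 1.20),  # tight hardware/operational constraints
}

def cocomo_basic_effort(kloc: float, mode: str = "organic") -> float:
    """Return estimated effort in person-months for a project of `kloc` KSLOC."""
    a, b = COCOMO_BASIC[mode]
    return a * kloc ** b

# Example: a 50-KLOC semi-detached project.
print(round(cocomo_basic_effort(50, "semi-detached"), 1))  # ≈ 239.9 person-months
```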
- Software size can be measured using:
1) SLOC 2) Function points
- Most software experts argue that SLOC is a poor measure of size, but it still has some value in
the software industry.
- SLOC worked well for custom-built applications because its measurement was easy to automate
and instrument.
- Nowadays, with many automatic source code generators and advanced higher-level languages
available, SLOC is a more ambiguous measure.
- The main advantage of function points is that this method is independent of the technology and
is therefore a much better primitive unit for comparisons among projects and organizations.
- The main disadvantage of function points is that the primitive definitions are abstract and
measurements are not easily derived directly from the evolving artifacts.
- Nevertheless, SLOC can still serve as a useful and precise measurement basis for various
metrics perspectives.
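A common way to bridge the two measures is “backfiring”: multiplying a function-point count by a
per-language expansion factor to approximate SLOC. The sketch below uses expansion factors
commonly attributed to Capers Jones; treat the exact numbers as assumptions, since published
tables vary.

```python
# Sketch of "backfiring": approximating SLOC from a function-point count.
# Expansion factors (SLOC per function point) are commonly cited
# approximations and vary between published tables -- treat as assumptions.

SLOC_PER_FP = {
    "assembly": 320,
    "c":        128,
    "cobol":    107,
    "c++":       53,
    "java":      53,
}

def backfire(function_points: int, language: str) -> int:
    """Approximate SLOC implied by `function_points` in the given language."""
    return function_points * SLOC_PER_FP[language]

# The same 500-FP system implies very different SLOC counts by language,
# which is why function points are the more technology-independent measure.
print(backfire(500, "c"))     # 64000 SLOC
print(backfire(500, "java"))  # 26500 SLOC
```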
- Most real-world use of cost models is bottom-up (substantiating a target cost) rather than
top-down (estimating what the cost should be).
- The software project manager defines the target cost of the software, then manipulates the
parameters and sizing until the target cost can be justified.
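This bottom-up pattern can be pictured as a goal-seek over any parametric cost model. The
self-contained sketch below inverts the basic COCOMO form by bisection to find the size
assumption that a given target effort can justify; the coefficients, search bounds, and iteration
count are all illustrative assumptions.

```python
# Sketch of the bottom-up usage pattern: fix a target effort, then search
# for the size estimate that justifies it. The cost model is the basic
# COCOMO form Effort = a * KLOC**b (semi-detached coefficients assumed).

def size_for_target(target_pm: float, a: float = 3.0, b: float = 1.12) -> float:
    """Bisect for the KLOC at which a * KLOC**b reaches target_pm."""
    lo, hi = 0.1, 10_000.0
    for _ in range(200):  # effort grows monotonically with size
        mid = (lo + hi) / 2
        if a * mid ** b < target_pm:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: a manager with a 240 person-month budget "backs into" the size
# estimate that the budget can justify.
print(round(size_for_target(240.0), 1))  # ≈ 50.0 KLOC
```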