From Sage 500 to 1000

Performance Testing myths exposed
Richard Bishop
From Sage 500 to 1000
… Performance Testing myths exposed

 Introduction
 Trust IV & Richard Bishop
 Project Background
 8 myths of (non-functional) software testing
 What we did… breaking myths
 Conclusion
Introduction
Who are we?


Trust IV
• Founded in 2005
• Testing consultancy, specialising in automated non-functional testing



Richard Bishop, Trust IV Ltd
• IT consultant with 20 years' experience
• Specialising in Microsoft platforms and performance engineering / testing
• HP specialist, UK leader of Vivit (HP software user group in UK)



Colleagues
• Mixture of consultants and contract resources
• Primarily HP LoadRunner specialists
• On customer sites and working remotely
• Skills in multiple test tools, platforms etc.
Non-Functional Testing
What on earth is NFT?
Non-functional, automated testing specialists

In a nutshell… usability, reliability and scalability:
Compatibility testing
Compliance testing
Security testing
Backup and Disaster Recovery testing
Load testing
Performance testing
Scalability testing
Stress testing
Project Background
University Hospital Birmingham
 Sage 500 to Sage 1000 migration
 Concerns re: number of concurrent users supported
 Required performance test to validate potential maximum user load
 Test to include a single “user journey”, simulating a requisition process
 Objective: increase user load until system failure
8 Myths
(of non-functional software testing)
1. The vendor/developer has already tested this, so we don’t need to
2. NFT not required if functional testing and UAT is OK
3. You can / can’t test in a live environment
4. Anyone can test…
5. Web applications are easy to test using low-cost / open source tools
6. If a test fails, “it must be the test tool / tester’s fault”
7. Testing is too expensive / time consuming
8. If it’s slow we can “throw kit at it”
Myth 1
Vendor/developer already tested - we don’t need to…
Myth 2
NFT not required if functional testing and UAT is OK

Single user:
Load balancer / IIS servers / Forms Server / DB Server

Multi user:
Load balancer / IIS servers / Forms Server / DB Server
Myth 3
You can/can’t test in a live environment

…results can be unreliable
…and bad things can happen
Myth 3
You can/can’t test in a live environment
You can…
…prior to launch
…or with extremely careful planning
…or by mistake
Myth 3
Live environments… a cautionary tale
Myth 4
Anyone can test an application.

Source: http://www.pixar.com/short_films/Theatrical-Shorts/Lifted
Myth 5
Web apps are easy to test using low-cost/no-cost tools
Myth 6
If a test fails, “it must be the test tool / tester’s fault”
Myth 7
Testing is too expensive / time consuming

“The money spent with Trust IV was the best
money spent on the whole project”
Myth 8
If it's slow we can "throw kit at it"
What we did
Our standard test approach:
POC → Scripting → Low-volume tests → Performance tests → Analysis
POC- January 2013
We had a “steroid ferret”


NOT “just” a web app
POC - January 2013
Needed specialist skills
 “Not just a web app”
 Can use low cost / open source tools to test
 Anyone can test
Digging deeper…
SAGE 1000 uses two communications protocols
Had to convert displayed “human readable” text to legacy formats
to allow SAGE 1000 to interpret our simulated user input…
Digging deeper…

Complex test data requirements
Digging deeper…
Correlation

135532676 = 0x08141084 = 0x0A 0x08 0x14 0x10 0x84 0x0D
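The conversion above can be sketched in Python. This is an illustration, not the actual test-tool correlation code, and the LF/CR framing bytes are our reading of the captured traffic:

```python
# Convert the decimal ID seen on screen into the legacy on-the-wire
# byte sequence that SAGE 1000 expects (illustrative sketch only).
def encode_legacy(value: int) -> bytes:
    payload = value.to_bytes(4, byteorder="big")  # 135532676 -> 08 14 10 84
    return b"\x0a" + payload + b"\x0d"            # frame with LF ... CR

encoded = encode_legacy(135532676)
print(encoded.hex(" "))  # -> 0a 08 14 10 84 0d
```

A load-test script would apply this per-iteration to each captured identifier before replaying it.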
What we did
Our plan…
 Script simulating login and requisition
 1000 user accounts, 30,000 products

BUT
 Application update in January meant re-coding, adding delay
 Initial tests showed problems with scalability… before the requisition step was fully scripted

• Problems @ 20-user load: blank screens / HTTP 500 errors
• Spent time proving the test tool: manual tests with volunteers & network traces to prove the simulation equivalent to real users
Iterative testing began…

…modified test approach as we found defects
Initial tests
 Very high thread count: > 2,300 threads associated with the JCSP process
 Very high context switch rate: > 17,000 switches / sec
Initial Tests
Key observations

Metric                   “System Idle”   “Manual test”   “Automated test”
CPU utilisation          <2%             <2%             <6%
Available MBytes         30.1 GB         28.73 GB        27.3 GB
JCSP thread count        82              290             371
Total thread count       797             1932            2446
Context switches/sec     275             2000            2500
Processor queue length   <1              <1              <1

High thread count and context switching are key issues
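Reading those counters programmatically makes the anomaly obvious: thread counts and context switching grow far faster from idle to automated load than CPU or memory do. A small sketch using the table's numbers; the 3x threshold is an illustrative choice, not one used in the original analysis:

```python
# Counters from the observations table, keyed by metric:
# (system idle, manual test, automated test)
counters = {
    "jcsp_threads":         (82, 290, 371),
    "total_threads":        (797, 1932, 2446),
    "context_switches_sec": (275, 2000, 2500),
}

# Flag any counter that grew more than 3x from idle to automated load.
for name, (idle, manual, auto) in counters.items():
    growth = auto / idle
    if growth > 3:
        print(f"{name}: grew {growth:.1f}x under automated load")
```

All three counters trip the threshold, while CPU stays under 6% and the processor queue stays under 1, pointing at thread management rather than raw capacity.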
Initial Tests
User experience
Response times > 20 seconds
Next steps
Reconfiguration & re-test

We made recommendations to improve performance. We asked Datel to check:
• Heap size
• Application pool size
• Timeout values

Datel reconfigured the application server:
• Encrypted login credentials within the application
• Altered TCP/IP timeout values and keep-alives
• Set lifetime session limit to 30 minutes
• Registry changes
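On a Windows application server, TCP/IP timeout and keep-alive changes of this kind are typically registry values such as those below. This is a hypothetical illustration of the sort of change involved; the deck does not disclose the actual keys or values Datel used:

```ini
; HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
; Illustrative values only -- not the ones used in this project.
"TcpTimedWaitDelay"=dword:0000001e   ; 30 s before a closed socket leaves TIME_WAIT
"KeepAliveTime"=dword:0001d4c0       ; 120,000 ms before the first keep-alive probe
```

Shortening TIME_WAIT and keep-alive intervals can matter in load tests because thousands of short-lived virtual-user connections exhaust ports and sockets faster than real users do.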
Next Steps
Re-test

Re-tested, but… a load balancer problem emerged

Re-test
Observations
 Despite the load balancer problem, response times were consistent
 No degradation over time
Further tests
Test stats
1. 230 users: 1289 logins / hr
2. 250 users: 1383 logins / hr
3. 500 users: 2715 logins / hr
4. 500 users: 5412 logins / hr
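The login rates imply very different per-user pacing between runs 3 and 4 (same 500 users, roughly double the throughput). A quick check of the arithmetic; the pacing interpretation is ours, not stated in the deck:

```python
# Test runs from the slide: (virtual users, logins per hour).
runs = [(230, 1289), (250, 1383), (500, 2715), (500, 5412)]

for users, logins_per_hr in runs:
    per_user_hr = logins_per_hr / users  # logins per user per hour
    pacing_s = 3600 / per_user_hr        # implied seconds between logins
    print(f"{users} users: {per_user_hr:.1f} logins/user/hr, "
          f"~{pacing_s:.0f} s pacing")
```

Run 4 implies each virtual user logging in roughly every five and a half minutes, against more than ten minutes in run 1, which is consistent with the pacing being tightened between test runs.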
Final report
…some caveats
 We noticed large numbers of HTTP 404 errors
 Still missed some “asynchronous” traffic
 Hadn’t tested complex application flows, due primarily to time constraints
Conclusion
What have we learned?

You probably need to test
• Reduced response times for SAGE 1000 login from > 20 seconds (and timeouts) to ≈ 3 s
• The application worked, just not in our particular configuration

Testing needn’t be expensive
• Thanks to UHB for the endorsement

Don’t trust vendors (or developers) to test
Q&A

richard.bishop@trustiv.co.uk
@richardbishop @TrustIV
0844 870 0301
