This document discusses performance assurance for packaged applications like Oracle Enterprise Performance Management. It outlines key steps for performance assurance including defining requirements, designing for best practices, verifying performance during development, testing, and monitoring production. Performance testing is recommended to mitigate risks, though it requires realistic loads and careful scripting. A top-down approach is advocated for performance troubleshooting, examining hardware, configuration, design and logs before suspecting product issues. Examples of common performance problems and their solutions are also provided.
This document provides an overview of key considerations for planning and implementing a SharePoint backup and recovery solution. It discusses scoping requirements with stakeholders, defining service level agreements, technical architecture options, policy and process documentation, testing procedures, training, and governance. The presentation aims to give attendees a holistic view of the end-to-end backup lifecycle for SharePoint.
GLOC 2018: Automation or How We Eliminated Manual EBS R12.2 Upgrades and Beca...
ennVee's presentation from the 2018 Great Lakes Oracle Conference in Cleveland, Ohio. Session hosted by Joe Bong (Vice President) and Veera Venugopal (Head of Delivery). Topics include automation best practices for upgrading to Oracle E-Business Suite (EBS) R12.2, and the "Voice of the Customer," a collection of hundreds of survey responses from IT leaders that have upgraded or plan to upgrade to R12.2, covering top challenges, objectives, and timelines.
In today's world it's critical to have visibility into every task delegated to another employee, and it is even more important to collaborate effectively toward the common goals of an organization. Q-Track task management software helps you achieve both.
This document discusses different software development life cycle (SDLC) models including iterative and spiral models. The iterative model involves building a product incrementally in iterations, with requirements evolving in each iteration based on user feedback. The spiral model similarly progresses in iterations but places more emphasis on risk analysis. Each spiral involves planning, risk analysis, engineering, and evaluation phases. The document also covers advantages and disadvantages of each model, as well as the role of management in software projects, including planning, monitoring and control, and termination analysis.
ALM-PLM Integration with Business Process Management
This document discusses integrating ALM (Application Lifecycle Management) and PLM (Product Lifecycle Management) systems through business process management. It outlines key capabilities like process integration to manage hardware and software requirements together. Connecting ALM and PLM can be done through data exchange and workflow integration. Benefits include increased transparency, faster time to market, cost savings, and end-to-end traceability across all product assets. The presenter invites attendees to a follow up webinar on achieving gapless end-to-end traceability.
Capstone Technology Canada - Advanced Process Control Project Lifecycle
Capstone Technology Canada is an advanced process control technology company focusing on the oil and gas industry. It provides various advanced process control solutions including model predictive control, multivariate modeling, and real-time optimization. Capstone follows a standard 9-step APC project lifecycle that includes benefits analysis, dynamic modeling, controller design, commissioning, training, and long-term maintenance. Its experienced team and local presence help ensure successful APC implementation and utilization.
This document provides information about software testing presented by Rounak Shaik from Talent Sprint. It discusses the basics of software testing including an overview, importance in the IT industry, career growth, and the future workplace. It defines key terms like errors, defects, bugs, and failures. It describes why testing is important for quality, delivering error-free systems, and avoiding costs from failures. It also outlines different types of testing like manual and automation testing.
Isha Training Solutions Presents "Performance Engineering" course.
For course content and other information, please follow the link below:
http://ishatrainingsolutions.org/performance-engineering/
Live project support is provided for any performance testing tool and any protocol under one roof -- call or WhatsApp me on +91-8019952427.
-----------------------------------------------------------------------------------------------------------------------------------
Other Courses Offered by ISHA
1) Performance Engineering Course
http://ishatrainingsolutions.org/performance-engineering/
2) Cloud Performance Engineering in DevOps - Core to Master Level
http://ishatrainingsolutions.org/cloud-performance-engineering-devops-the-complete-course/
3) AppDynamics
http://ishatrainingsolutions.org/app-dynamics/
4) Dynatrace
http://ishatrainingsolutions.org/dynatrace-training/
5) JMeter Core to Master Level
http://ishatrainingsolutions.org/jmeter-core-to-master-level-course/
6) Performance Testing using LoadRunner
http://ishatrainingsolutions.org/microfocus-loadrunner/
7) Advanced LoadRunner
http://ishatrainingsolutions.org/advanced-scripting/
8) Web Services Performance Testing using LoadRunner
http://ishatrainingsolutions.org/performance-testing-of-webservices-using-loadrunner-recorded-videos/
9) SAPGUI Protocol - Performance Testing for SAP Applications Using LoadRunner
http://ishatrainingsolutions.org/loadrunner-sap-web-protocol/
10) TruClient Protocol Using LoadRunner
http://ishatrainingsolutions.org/true-client-protocol/
11) Mobile Performance Testing using LoadRunner and JMeter
http://ishatrainingsolutions.org/mobile-performance-testing-using-loadrunner/
12) Performance Testing using NeoLoad
http://ishatrainingsolutions.org/performance-testing-using-neoload/
13) Splunk
http://ishatrainingsolutions.org/splunk-training/
14) Selenium
http://ishatrainingsolutions.org/2792-2/
*********************************
For further details, please contact me.
Contact : Kumar Gupta
Call : +91-8019952427
Whatsapp : +91-8019952427
Email : kgupta.testingtraining@gmail.com
The document describes the key activities and concepts in software development processes including requirements analysis, specification, architecture, design, implementation, testing, deployment, and maintenance. It discusses various process models like waterfall, agile, iterative, RAD, and XP. It also covers supporting disciplines such as configuration management, documentation, quality assurance, and project management as well as development tools.
The document discusses how traditional product design methods often do not optimize key parameters like geometry, motion, forces, and tolerances. This can lead to increased costs from changes later in the design process. It introduces Enventive software as a way to model and optimize these parameters earlier on through computer simulations. Enventive allows users to represent functional intent, model kinematics and forces, perform tolerance analysis, and optimize parameters. This helps users design products that are more effective, manufacturable, and cost-efficient.
10 Ways to Better Application-Centric Service Management
Many IT organizations suffer from the nagging problems of Availability and Performance Management. In this presentation we will detail 10 Ways to Better Application-Centric Service Management, particularly with SAP environments.
Vishal Sarode has over 9 years of experience in SAP Basis administration. He has experience implementing, supporting, and upgrading SAP systems. Some of his responsibilities include user management, transport management, system monitoring, troubleshooting, and conducting system upgrades. He is proficient in AIX, Windows, and Z/OS operating systems and has experience with various SAP modules.
July webinar l How to Handle the Holiday Retail Rush with Agile Performance T...
In this Q&A-style webinar, you'll learn:
1. How and why to load test at least three months prior to the holidays
2. How to integrate CI/CD into your holiday load testing
3. How to determine and evaluate load curves
The document provides guidance on implementing an enterprise system in 6 main steps: 1) Project management to set up the team and scope, 2) Preparation including training, data collection, and setup, 3) Build the customer-specific implementation by configuring the system, 4) Prepare for roll-out with user documentation and training, 5) Deployment and go-live, and 6) Support and manage enhancement requests after launch. It emphasizes preparation, training, and a phased approach to ensure success.
The document proposes moving to a new Active Directory structure to make maintenance and monitoring easier while reducing costs. Consolidating servers across 40 stores could save ~40,000 per new store in hardware costs and ~5,40,000 per year in monitoring costs. Introducing single points of contact and proactive maintenance of Active Directory and other services through tools like Intel PAP would help address current problems of multiple contacts, lack of accountability, and speed up response and resolution times for IT issues.
The document discusses performance testing and provides details about:
1) The objectives of performance testing including validating requirements, checking capacity, and identifying issues.
2) The differences between performance, load, and stress testing.
3) Why performance testing is important including checking scalability, stability, availability, and gaining confidence.
4) Parameters to consider in performance testing like throughput, latency, efficiency, and degradation.
5) Potential sources of performance bottlenecks like the network, web server, application server, and database server.
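The parameters listed above can be made concrete with a small sketch. The following Python snippet (illustrative only; the function names, sample latencies, and the nearest-rank percentile method are assumptions, not taken from the presentation) shows how throughput and a latency percentile might be derived from a batch of measured response times:

```python
# Illustrative sketch: computing throughput and a latency percentile
# from measured response times. All names and numbers are made up.

def latency_percentile(samples_ms, pct):
    """Return the pct-th percentile latency using the nearest-rank method."""
    ordered = sorted(samples_ms)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def throughput_rps(num_requests, window_seconds):
    """Requests completed per second over the measurement window."""
    return num_requests / window_seconds

# Hypothetical response times (milliseconds) from a 2-second window.
response_times_ms = [120, 95, 240, 180, 110, 300, 150, 90, 210, 130]
p90 = latency_percentile(response_times_ms, 90)          # tail latency
rps = throughput_rps(len(response_times_ms), 2.0)        # throughput
```

Watching how these two numbers degrade as load increases is one common way to locate the bottlenecks the summary mentions.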
In this presentation which was delivered to testers in Manchester, I help would-be performance testers to get started in performance testing. Drawing on my experiences as a performance tester and test manager, I explain the principles of performance testing and highlight some of the pitfalls.
Integration strategies best practices- Mulesoft meetup April 2018
The document discusses best practices for integration strategies including using an integration platform, designing integrations, and implementing resiliency patterns. It recommends having an integration platform to provide features like batch processing, loose coupling, reuse, governance, and security. When designing integrations, questions about data, users, transactions, orchestrations, and future needs should be considered. Common resiliency patterns discussed are timeouts, circuit breakers, bulkheads, retries, and idempotency.
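One of the resiliency patterns named above, the circuit breaker, can be sketched in a few lines. This is a minimal illustration, not MuleSoft's implementation; the class name, threshold, and state labels are assumptions for the example:

```python
# Minimal circuit-breaker sketch: after repeated failures, stop calling a
# failing dependency and fail fast instead. Thresholds are illustrative.

class CircuitBreaker:
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.state = "closed"  # closed = calls flow; open = calls rejected

    def call(self, func, *args):
        if self.state == "open":
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state = "open"   # stop hammering the failing backend
            raise
        self.failures = 0             # any success resets the count
        return result

breaker = CircuitBreaker(failure_threshold=2)

def flaky():
    raise IOError("backend down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except IOError:
        pass
# After two consecutive failures the breaker opens and rejects further calls.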
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. It is important to see a bigger picture beyond stereotypical, last-moment load testing. There are multiple dimensions of load testing: environment, load generation, testing approach, life-cycle integration, feedback and analysis. This paper discusses these dimensions and how load testing tools support them.
This document discusses various concepts related to ensuring adequate performance of IT infrastructure. It covers perceived performance from an end user perspective and how to account for performance during infrastructure design. Methods discussed for evaluating performance during the design phase include benchmarking, leveraging vendor experience, prototyping, and user profiling. The document also addresses managing performance of running systems through techniques like performance testing, identifying and addressing bottlenecks, leveraging caching, and scaling infrastructure through vertical and horizontal expansion approaches like load balancing.
Load testing is an important part of the performance engineering process. However the industry is changing and load testing should adjust to these changes - a stereotypical, last-moment performance check is not enough anymore. There are multiple aspects of load testing - such as environment, load generation, testing approach, life-cycle integration, feedback and analysis - and none remains static. This presentation discusses how performance testing is adapting to industry trends to remain relevant and bring value to the table.
Beit 381 se lec 15 - 16 - 12 mar27 - req engg 1 of 3
The document provides an overview of requirements engineering as the first stage of the software development process. It discusses how requirements are initially vague and ill-defined, and must be precisely defined to guide implementation. Requirements engineering involves elicitation, analysis, and specification, with the output being a Software Requirements Specification document. The document outlines key aspects that should be included in an SRS, such as functional requirements, data requirements, performance requirements, design constraints, and guidelines. It also discusses techniques for requirements analysis like use case modeling and data flow diagrams.
The document discusses context-driven performance testing. It advocates for early performance testing using exploratory and continuous testing approaches in agile development. Testing should be done at the component level using various environments like cloud, lab, and production. Load can be generated through record and playback, programming, or using production workloads. Defining a performance testing strategy involves determining risks, appropriate tests, timing, and processes based on the project context. The strategy is part of an overall performance engineering approach.
The document discusses key concepts for designing IT infrastructure to ensure high performance. It covers perceived performance from a user perspective, benchmarking systems, profiling users to predict load, identifying and managing bottlenecks, scaling systems horizontally and vertically, load balancing, caching frequently used data, and designing systems based on their intended use to optimize performance. The overall goal is to design infrastructure that can meet performance requirements under all conditions, both currently and as load increases over time.
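Two of the techniques the summary names, round-robin load balancing and caching frequently used data, can be illustrated briefly. The server names and cache keys below are made-up examples, not from the document:

```python
# Illustrative sketch of round-robin load balancing and simple caching.
from itertools import cycle

servers = cycle(["web-1", "web-2", "web-3"])  # rotate requests across hosts

def route_request():
    return next(servers)

cache = {}

def cached_lookup(key, compute):
    """Serve repeated requests from memory instead of recomputing."""
    if key not in cache:
        cache[key] = compute(key)
    return cache[key]

first_three = [route_request() for _ in range(3)]  # each host gets one request
fourth = route_request()                           # rotation wraps around
```

Horizontal scaling in this sense just means adding more entries to the server rotation, while caching reduces the load each request places on the backend.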
The document provides an overview of performance testing, including:
- Defining performance testing and comparing it to functional testing
- Explaining why performance testing is critical to evaluate a system's scalability, stability, and ability to meet user expectations
- Describing common types of performance testing like load, stress, scalability, and endurance testing
- Identifying key performance metrics and factors that affect software performance
- Outlining the performance testing process from planning to scripting, testing, and result analysis
- Introducing common performance testing tools and methodologies
- Providing examples of performance test scenarios and best practices for performance testing
Load testing with Visual Studio and Azure - Andrew Siemer
In this presentation we will look at what web performance testing is and the various types of testing that can be performed. We will then dig into Visual Studio 2013 Ultimate to see that the Visual Studio platform is now a real contender in performance testing automation. And we will see how the Visual Studio integration with Visual Studio Online and Azure can take your web performance tests and spin up impressive load tests in a truly useful way.
- JMeter is an open source load testing tool that can test web applications and other services. It uses virtual users to simulate real user load on a system.
- JMeter tests are prepared by recording HTTP requests using a proxy server. Tests are organized into thread groups and loops to simulate different user behaviors and loads.
- Tests can be made generic by using variables and default values so the same tests can be run against different environments. Assertions are added to validate responses.
- Tests are run in non-GUI mode for load testing and can be distributed across multiple machines for high user loads. Test results are analyzed using aggregated graphs and result trees.
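The "virtual user" idea behind JMeter's thread groups and loops can be sketched with plain threads. This toy illustration is not JMeter itself; `fake_request` is a stand-in for real HTTP calls, and all counts are illustrative:

```python
# Toy sketch of JMeter-style virtual users: several threads each loop
# through a scripted request, aggregating results under a lock.
import threading

results = []
lock = threading.Lock()

def fake_request(path):
    return 200  # stand-in for an HTTP GET; a real test would use a client

def virtual_user(user_id, loops):
    for _ in range(loops):            # like a JMeter loop controller
        status = fake_request("/home")
        with lock:                    # aggregate samples across threads
            results.append((user_id, status))

# 5 virtual users x 2 loops each, like a small thread group.
threads = [threading.Thread(target=virtual_user, args=(i, 2)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A real tool adds the parts this sketch omits: ramp-up schedules, assertions on responses, distributed load generation, and result reporting.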
This document provides an overview of various topics related to software project management. It begins with a list of suggested topics for discussion, such as challenges specific to software projects, quality measurements, and best practices in Pakistan. It then covers aspects of the software development lifecycle from planning and requirements through deployment and maintenance. Different project models like waterfall, evolutionary prototyping, and spiral development are described along with their advantages and disadvantages. Finally, it touches on using commercial off-the-shelf software.
Q-Track is an enterprise task management solution that helps you collaborate effectively within organizations and execute tasks on time and proactively. It gives managers visibility into delegated tasks and helps their subordinates close them, flagging exceptions. The solution integrates seamlessly with MS Outlook and MS Project.
Sonoco Products implemented FactoryTalk Metrics across 33 tube and core plants to increase visibility and boost efficiency. The standardized solution increased uptime by an average of 30% and reduced changeover times by 20%, allowing Sonoco to increase capacity by 3% while only requiring a 15% increase in resources. Integration with Oracle also provided a direct link between production data and financial reporting. Overall, the project delivered a repeatable and cost-effective approach to capturing real-time machine data for driving continuous improvement efforts.
The document describes the ADF Performance Monitor, a tool for measuring, analyzing, and improving the performance of Oracle Application Development Framework (ADF) applications. It collects metrics on response times, health, and resource usage. Issues are reported in dashboards and JDeveloper. It helps detect, analyze, and resolve common and uncommon problems. Implementation takes less than a day. The overhead is 3-4% and it can be turned on/off without overhead. It supports diagnosing specific users, errors, slow queries, and memory usage to quickly find problems.
The document discusses SDLC (Systems Development Life Cycle) and e-business. It begins by defining key terms like system, information system, and problem identification. It then explains various phases of SDLC like planning, analysis, design, implementation, testing and maintenance. It also discusses different SDLC models like waterfall, iterative and agile. The document also covers topics like requirements analysis, feasibility study, design and testing. Finally, it provides definitions of business, commerce and e-business and discusses how ICT technologies help in integrating business processes and enabling e-business.
Alexander Podelko - Context-Driven Performance TestingNeotys_Partner
Since its beginning, the Performance Advisory Council aims to promote engagement between various experts from around the world, to create relevant, value-added content sharing between members. For Neotys, to strengthen our position as a thought leader in load & performance testing. During this event, 12 participants convened in Chamonix (France) exploring several topics on the minds of today’s performance tester such as DevOps, Shift Left/Right, Test Automation, Blockchain and Artificial Intelligence.
Performance testing is done to determine a system's responsiveness under different loads. It aims to optimize user experience. Types of performance testing include load, stress, soak/endurance, volume, scalability, and spike testing. The goals are to assess production readiness, compare platforms, evaluate configurations, and check against criteria. Pre-requisites include a stable test environment similar to production. The testing process involves establishing baselines and benchmarks, running tests, and analyzing results to identify bottlenecks and decide on fixes. Common issues relate to servers, databases, networks, and applications. Optimization involves improvements, upgrades, and tuning. Challenges include setting up the test environment and analyzing large amounts of test data.
This document discusses project management principles and processes. It covers topics such as the importance of project management, knowledge areas, project identification and planning, risk management, and project execution. The document provides examples of projects and defines characteristics that distinguish projects from routine tasks. It also discusses project life cycles, activities involved in project execution like requirements analysis and testing, and potential problems in software projects.
The document provides recommendations for monitoring tools, best practices, and support procedures for Oracle EBS production support. It recommends implementing Oracle monitoring tools, maintaining checklists and documentation, following password and change management policies, and validating backups. It also provides best practices for concurrent managers, such as defining work shifts, caching requests, and purging obsolete workflow data. Log and output files should be archived according to retention policies.
The document discusses performance tuning for Grails applications. It outlines that performance aspects include latency, throughput, and quality of operations. Performance tuning optimizes costs and ensures systems meet requirements under high load. Amdahl's law states that parallelization cannot speed up non-parallelizable tasks. The document recommends measuring and profiling, making single changes in iterations, and setting up feedback cycles for development and production environments. Common pitfalls in profiling Grails applications are also discussed.
Similar to Performance Assurance for Packaged Applications (20)
Continuous Performance Testing: Challenges and ApproachesAlexander Podelko
Integrating performance testing into agile development processes presents several challenges. Continuous performance testing aims to automate performance tests run during each build or iteration. This helps detect performance regressions early. However, it requires optimizing test coverage given time and resource constraints. Other challenges include reducing noise in test results, detecting changes between builds, advanced analysis of results, and defining organizational roles and responsibilities for maintaining tests. The document discusses these challenges and how companies like MongoDB have implemented continuous performance testing in their development pipelines to address them.
Multiple Dimensions of Load Testing, CMG 2015 paperAlexander Podelko
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. It is important to see a bigger picture beyond stereotypical, last-moment load testing. There are multiple dimensions of load testing: environment, load generation, testing approach, life-cycle integration, feedback and analysis. This paper discusses these dimensions and how load testing tools support them.
Load testing is an important part of the performance engineering process. However the industry is changing and load testing should adjust to these changes - a stereotypical, last-moment performance check is not enough anymore. There are multiple aspects of load testing - such as environment, load generation, testing approach, life-cycle integration, feedback and analysis - and none remains static. This presentation discusses how performance testing is adapting to industry trends to remain relevant and bring value to the table.
Continuous Performance Testing: Myths and RealitiesAlexander Podelko
The document discusses continuous performance testing in the context of agile development. It notes that while continuous performance testing is becoming more common, it remains a challenge to fully integrate into continuous integration processes. Different approaches like record and playback automation, API scripting, and exploratory testing each have advantages and disadvantages depending on the system and development context. Fully automating all performance tests may not be realistic, so a tiered approach with some simple automated tests alongside more extensive manual testing is often needed. The key is finding the right balance and mix of approaches for each unique situation.
Tools of the Trade: Load Testing - Ignite session at WebPerfDays NY 14Alexander Podelko
Tools of the Trade: Load Testing - an Ignite session at WebPerfDays NY 2014. Some consideration about load testing and selecting load testing tools - as much as could be squeezed into 5 min / 20 slides.
Load Testing: See a Bigger Picture, ALM Forum, 2014Alexander Podelko
The document discusses different approaches to load testing, including load generation techniques like record and playback, real users, programming, and mixed approaches. It also covers load testing environments such as lab vs cloud vs production environments. Finally, it provides an overview of various load testing tools and how their suitability depends on factors like the technologies being tested and testing needs. The key message is that load testing is an important part of performance risk mitigation but requires choosing the right approach and tools based on the specific testing situation.
This document summarizes a presentation on web performance given by Alexander Podelko at WebPerfDays New York 2013. The presentation covered performance basics, the importance of considering both front-end and back-end performance, and different approaches to performance risk mitigation. However, the presenter argued that load testing is still needed to complement these approaches, as it is the only way to verify that a system can handle expected load levels and identify potential multi-user issues. Load testing was discussed in more detail, with examples of how it can be used for performance optimization and debugging. The presentation concluded by emphasizing the importance of taking a holistic, end-to-end view of performance.
The document provides a short history of performance engineering, beginning in the 1960s with the introduction of instrumentation tools for mainframe systems and the first studies of human response times. Key developments include the establishment of the performance engineering community in the 1970s, the first commercial performance analysis tools and distributed computing in the late 1970s, and the publication of early books on software performance engineering and applying existing expertise to web performance in the 1990s. The history shows that performance has been an ongoing concern across different computing paradigms, with new challenges arising with each new technology.
Performance Requirements: CMG'11 slides with notes (pdf)Alexander Podelko
Performance requirements should be tracked throughout a system's entire lifecycle, from inception through design, development, testing, operations, and maintenance. However, different groups involved at each stage use their own terminology and metrics, making performance requirements confusing. The document aims to provide a holistic view of performance requirements by discussing key metrics like throughput, response time, and concurrency used across the lifecycle. It also addresses issues like ensuring requirements are defined consistently regardless of changing workloads or system optimizations.
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. Still it is important to see a bigger picture beyond stereotypical last-moment load testing. There are different ways to create load; a single approach may not work in all situations. Many tools allow you to use different ways of recording/playback and programming. This session discusses pros and cons of each approach, when it can be used and what tool's features we need to support it.
Performance Requirements: the Backbone of the Performance Engineering ProcessAlexander Podelko
Performance requirements should to be tracked from system's inception through its whole lifecycle including design, development, testing, operations, and maintenance. They are the backbone of the performance engineering process. However different groups of people are involved in each stage and they use their own vision, terminology, metrics, and tools that makes the subject confusing when you go into details. The presentation discusses existing issues and approaches in their relationship with the performance engineering process.
Choose our Linux Web Hosting for a seamless and successful online presencerajancomputerfbd
Our Linux Web Hosting plans offer unbeatable performance, security, and scalability, ensuring your website runs smoothly and efficiently.
Visit- https://onliveserver.com/linux-web-hosting/
Measuring the Impact of Network Latency at TwitterScyllaDB
Widya Salim and Victor Ma will outline the causal impact analysis, framework, and key learnings used to quantify the impact of reducing Twitter's network latency.
How RPA Help in the Transportation and Logistics Industry.pptxSynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
INDIAN AIR FORCE FIGHTER PLANES LIST.pdfjackson110191
These fighter aircraft have uses outside of traditional combat situations. They are essential in defending India's territorial integrity, averting dangers, and delivering aid to those in need during natural calamities. Additionally, the IAF improves its interoperability and fortifies international military alliances by working together and conducting joint exercises with other air forces.
Comparison Table of DiskWarrior Alternatives.pdfAndrey Yasko
To help you choose the best DiskWarrior alternative, we've compiled a comparison table summarizing the features, pros, cons, and pricing of six alternatives.
UiPath Community Day Kraków: Devs4Devs ConferenceUiPathCommunity
We are honored to launch and host this event for our UiPath Polish Community, with the help of our partners - Proservartner!
We certainly hope we have managed to spike your interest in the subjects to be presented and the incredible networking opportunities at hand, too!
Check out our proposed agenda below 👇👇
08:30 ☕ Welcome coffee (30')
09:00 Opening note/ Intro to UiPath Community (10')
Cristina Vidu, Global Manager, Marketing Community @UiPath
Dawid Kot, Digital Transformation Lead @Proservartner
09:10 Cloud migration - Proservartner & DOVISTA case study (30')
Marcin Drozdowski, Automation CoE Manager @DOVISTA
Pawel Kamiński, RPA developer @DOVISTA
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
09:40 From bottlenecks to breakthroughs: Citizen Development in action (25')
Pawel Poplawski, Director, Improvement and Automation @McCormick & Company
Michał Cieślak, Senior Manager, Automation Programs @McCormick & Company
10:05 Next-level bots: API integration in UiPath Studio (30')
Mikolaj Zielinski, UiPath MVP, Senior Solutions Engineer @Proservartner
10:35 ☕ Coffee Break (15')
10:50 Document Understanding with my RPA Companion (45')
Ewa Gruszka, Enterprise Sales Specialist, AI & ML @UiPath
11:35 Power up your Robots: GenAI and GPT in REFramework (45')
Krzysztof Karaszewski, Global RPA Product Manager
12:20 🍕 Lunch Break (1hr)
13:20 From Concept to Quality: UiPath Test Suite for AI-powered Knowledge Bots (30')
Kamil Miśko, UiPath MVP, Senior RPA Developer @Zurich Insurance
13:50 Communications Mining - focus on AI capabilities (30')
Thomasz Wierzbicki, Business Analyst @Office Samurai
14:20 Polish MVP panel: Insights on MVP award achievements and career profiling
Coordinate Systems in FME 101 - Webinar SlidesSafe Software
If you’ve ever had to analyze a map or GPS data, chances are you’ve encountered and even worked with coordinate systems. As historical data continually updates through GPS, understanding coordinate systems is increasingly crucial. However, not everyone knows why they exist or how to effectively use them for data-driven insights.
During this webinar, you’ll learn exactly what coordinate systems are and how you can use FME to maintain and transform your data’s coordinate systems in an easy-to-digest way, accurately representing the geographical space that it exists within. During this webinar, you will have the chance to:
- Enhance Your Understanding: Gain a clear overview of what coordinate systems are and their value
- Learn Practical Applications: Why we need datams and projections, plus units between coordinate systems
- Maximize with FME: Understand how FME handles coordinate systems, including a brief summary of the 3 main reprojectors
- Custom Coordinate Systems: Learn how to work with FME and coordinate systems beyond what is natively supported
- Look Ahead: Gain insights into where FME is headed with coordinate systems in the future
Don’t miss the opportunity to improve the value you receive from your coordinate system data, ultimately allowing you to streamline your data analysis and maximize your time. See you there!
Scaling Connections in PostgreSQL Postgres Bangalore(PGBLR) Meetup-2 - MydbopsMydbops
This presentation, delivered at the Postgres Bangalore (PGBLR) Meetup-2 on June 29th, 2024, dives deep into connection pooling for PostgreSQL databases. Aakash M, a PostgreSQL Tech Lead at Mydbops, explores the challenges of managing numerous connections and explains how connection pooling optimizes performance and resource utilization.
Key Takeaways:
* Understand why connection pooling is essential for high-traffic applications
* Explore various connection poolers available for PostgreSQL, including pgbouncer
* Learn the configuration options and functionalities of pgbouncer
* Discover best practices for monitoring and troubleshooting connection pooling setups
* Gain insights into real-world use cases and considerations for production environments
This presentation is ideal for:
* Database administrators (DBAs)
* Developers working with PostgreSQL
* DevOps engineers
* Anyone interested in optimizing PostgreSQL performance
Contact info@mydbops.com for PostgreSQL Managed, Consulting and Remote DBA Services
The Rise of Supernetwork Data Intensive ComputingLarry Smarr
Invited Remote Lecture to SC21
The International Conference for High Performance Computing, Networking, Storage, and Analysis
St. Louis, Missouri
November 18, 2021
Are you interested in dipping your toes in the cloud native observability waters, but as an engineer you are not sure where to get started with tracing problems through your microservices and application landscapes on Kubernetes? Then this is the session for you, where we take you on your first steps in an active open-source project that offers a buffet of languages, challenges, and opportunities for getting started with telemetry data.
The project is called openTelemetry, but before diving into the specifics, we’ll start with de-mystifying key concepts and terms such as observability, telemetry, instrumentation, cardinality, percentile to lay a foundation. After understanding the nuts and bolts of observability and distributed traces, we’ll explore the openTelemetry community; its Special Interest Groups (SIGs), repositories, and how to become not only an end-user, but possibly a contributor.We will wrap up with an overview of the components in this project, such as the Collector, the OpenTelemetry protocol (OTLP), its APIs, and its SDKs.
Attendees will leave with an understanding of key observability concepts, become grounded in distributed tracing terminology, be aware of the components of openTelemetry, and know how to take their first steps to an open-source contribution!
Key Takeaways: Open source, vendor neutral instrumentation is an exciting new reality as the industry standardizes on openTelemetry for observability. OpenTelemetry is on a mission to enable effective observability by making high-quality, portable telemetry ubiquitous. The world of observability and monitoring today has a steep learning curve and in order to achieve ubiquity, the project would benefit from growing our contributor community.
論文紹介:A Systematic Survey of Prompt Engineering on Vision-Language Foundation ...Toru Tamaki
Jindong Gu, Zhen Han, Shuo Chen, Ahmad Beirami, Bailan He, Gengyuan Zhang, Ruotong Liao, Yao Qin, Volker Tresp, Philip Torr "A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models" arXiv2023
https://arxiv.org/abs/2307.12980
Implementations of Fused Deposition Modeling in real worldEmerging Tech
The presentation showcases the diverse real-world applications of Fused Deposition Modeling (FDM) across multiple industries:
1. **Manufacturing**: FDM is utilized in manufacturing for rapid prototyping, creating custom tools and fixtures, and producing functional end-use parts. Companies leverage its cost-effectiveness and flexibility to streamline production processes.
2. **Medical**: In the medical field, FDM is used to create patient-specific anatomical models, surgical guides, and prosthetics. Its ability to produce precise and biocompatible parts supports advancements in personalized healthcare solutions.
3. **Education**: FDM plays a crucial role in education by enabling students to learn about design and engineering through hands-on 3D printing projects. It promotes innovation and practical skill development in STEM disciplines.
4. **Science**: Researchers use FDM to prototype equipment for scientific experiments, build custom laboratory tools, and create models for visualization and testing purposes. It facilitates rapid iteration and customization in scientific endeavors.
5. **Automotive**: Automotive manufacturers employ FDM for prototyping vehicle components, tooling for assembly lines, and customized parts. It speeds up the design validation process and enhances efficiency in automotive engineering.
6. **Consumer Electronics**: FDM is utilized in consumer electronics for designing and prototyping product enclosures, casings, and internal components. It enables rapid iteration and customization to meet evolving consumer demands.
7. **Robotics**: Robotics engineers leverage FDM to prototype robot parts, create lightweight and durable components, and customize robot designs for specific applications. It supports innovation and optimization in robotic systems.
8. **Aerospace**: In aerospace, FDM is used to manufacture lightweight parts, complex geometries, and prototypes of aircraft components. It contributes to cost reduction, faster production cycles, and weight savings in aerospace engineering.
9. **Architecture**: Architects utilize FDM for creating detailed architectural models, prototypes of building components, and intricate designs. It aids in visualizing concepts, testing structural integrity, and communicating design ideas effectively.
Each industry example demonstrates how FDM enhances innovation, accelerates product development, and addresses specific challenges through advanced manufacturing capabilities.
Best Programming Language for Civil EngineersAwais Yaseen
The integration of programming into civil engineering is transforming the industry. We can design complex infrastructure projects and analyse large datasets. Imagine revolutionizing the way we build our cities and infrastructure, all by the power of coding. Programming skills are no longer just a bonus—they’re a game changer in this era.
Technology is revolutionizing civil engineering by integrating advanced tools and techniques. Programming allows for the automation of repetitive tasks, enhancing the accuracy of designs, simulations, and analyses. With the advent of artificial intelligence and machine learning, engineers can now predict structural behaviors under various conditions, optimize material usage, and improve project planning.
Understanding Insider Security Threats: Types, Examples, Effects, and Mitigat...Bert Blevins
Today’s digitally connected world presents a wide range of security challenges for enterprises. Insider security threats are particularly noteworthy because they have the potential to cause significant harm. Unlike external threats, insider risks originate from within the company, making them more subtle and challenging to identify. This blog aims to provide a comprehensive understanding of insider security threats, including their types, examples, effects, and mitigation techniques.
Quantum Communications Q&A with Gemini LLM. These are based on Shannon's Noisy channel Theorem and offers how the classical theory applies to the quantum world.
Quality Patents: Patents That Stand the Test of TimeAurora Consulting
Is your patent a vanity piece of paper for your office wall? Or is it a reliable, defendable, assertable, property right? The difference is often quality.
Is your patent simply a transactional cost and a large pile of legal bills for your startup? Or is it a leverageable asset worthy of attracting precious investment dollars, worth its cost in multiples of valuation? The difference is often quality.
Is your patent application only good enough to get through the examination process? Or has it been crafted to stand the tests of time and varied audiences if you later need to assert that document against an infringer, find yourself litigating with it in an Article 3 Court at the hands of a judge and jury, God forbid, end up having to defend its validity at the PTAB, or even needing to use it to block pirated imports at the International Trade Commission? The difference is often quality.
Quality will be our focus for a good chunk of the remainder of this season. What goes into a quality patent, and where possible, how do you get it without breaking the bank?
** Episode Overview **
In this first episode of our quality series, Kristen Hansen and the panel discuss:
⦿ What do we mean when we say patent quality?
⦿ Why is patent quality important?
⦿ How to balance quality and budget
⦿ The importance of searching, continuations, and draftsperson domain expertise
⦿ Very practical tips, tricks, examples, and Kristen’s Musts for drafting quality applications
https://www.aurorapatents.com/patently-strategic-podcast.html
RPA In Healthcare Benefits, Use Case, Trend And Challenges 2024.pptxSynapseIndia
Your comprehensive guide to RPA in healthcare for 2024. Explore the benefits, use cases, and emerging trends of robotic process automation. Understand the challenges and prepare for the future of healthcare automation
3. Oracle Enterprise Performance Management
• An integrated suite of applications
• The integration is tight, so from the end-user point of
view it may be difficult to tell which components and
data sources are involved
– Especially for those not deeply familiar with EPM, such as
administrators or performance testers
• A good example of packaged business applications to
discuss performance assurance and troubleshooting
Disclaimer: The views expressed here are personal views only and do not necessarily represent those of the authors’ current or
previous employers. All brands and trademarks mentioned are the property of their respective owners.
5. Performance Assurance
• EPM products are thoroughly tested, but they may be
used very differently in the field
– Think about Oracle Database: it is still possible to create a
slow database even though the Oracle Database software
itself is highly optimized for performance
• Performance Assurance
– Ongoing performance risk mitigation during the whole system
lifecycle
6. Performance Assurance Steps
• Define performance requirements
– Number of users, concurrency, what they do
• Design your applications according to best practices
• Verify performance along the way
– Single user with monitoring provides a lot of information
– Use realistic volume of data
• Do necessary tuning / configuration
7. Performance Assurance Steps - Continued
• Do performance testing
– Closely monitor the system in the process
– If results are not satisfactory, go back to tuning or design
• Adjust configuration based on performance testing
results [and re-test]
• Monitor the system in production
– Check if the pattern is the same as in testing
– Check trends
• Do performance testing of major changes and redesigns
8. Performance Requirements
• Workload
– Number of concurrent users
• From existing systems
• Percentage of named users
– What users will do
• Throughput (how many reports, requests, etc.)
• What components they use
• Performance Metrics
– Response times
– Resource utilization
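The workload arithmetic behind these requirements can be sketched in a few lines. The 35% concurrency figure echoes the typical configurations cited later in the deck; the actions-per-hour rate is an invented illustration, not an EPM guideline:

```python
# Minimal sketch: deriving workload numbers for a performance test.
# The actions-per-hour rate below is an illustrative assumption.

def concurrent_users(named_users: int, concurrency_pct: float) -> int:
    """Estimate peak concurrent users as a percentage of named users."""
    return round(named_users * concurrency_pct / 100)

def required_throughput(users: int, actions_per_hour: float) -> float:
    """Total actions per second the system must sustain."""
    return users * actions_per_hour / 3600

peak = concurrent_users(named_users=1000, concurrency_pct=35)  # 350
reports_per_sec = required_throughput(peak, actions_per_hour=12)
```

Writing the numbers down this way forces the two sides of a requirement — workload (users, throughput) and metrics (response time, utilization) — to be stated explicitly instead of guessed.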
9. Application Design
• Application design impacts performance drastically
• Adhere to best practices to ensure application
performance
– They should be applicable to your context
– You don’t need to follow them blindly, but if you deviate,
know why, and do a Proof of Concept to check how the
deviation will perform
– First of all, keep the number of dimensions, the number of
members, form sizes, hierarchy depth, etc. reasonable
10. Verify Performance Along the Way
• If you see a performance issue with one user, it will
probably be much worse for multiple users
– There may still be exceptions related to latency or caching
– Tuning is usually not beneficial for single-user issues
• Except, for example, for very large objects
– A hardware upgrade is usually not beneficial for single-user issues
• Except, for example, CPU speed
– This means that single-user results in the development environment
may be representative even if the hardware used is significantly
less powerful
– If one user consumes a lot of resources, it won’t be pleasant for
multiple users
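A single-user check like this can be sketched as a small timing harness: run the operation several times and look at median and worst-case latency alongside monitoring data. Here `open_form` is a hypothetical placeholder for the real action being verified:

```python
import statistics
import time

# Minimal sketch of a single-user timing check. `open_form` is a
# hypothetical stand-in for whatever action you want to verify.

def measure(operation, runs: int = 5):
    """Time an operation several times; return (median, worst) seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings), max(timings)

def open_form():            # placeholder for the real user action
    sum(range(10_000))      # simulate some work

median_s, worst_s = measure(open_form)
```

If the median is already poor with one user, multi-user testing will only confirm the problem, so it is cheaper to fix it first.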
11. Tuning
• If you have more than a dozen concurrent users,
you may need to do some tuning
– Some defaults are chosen to conserve resources
– Increase the max Java heap size for heavily used components
– Increase Essbase index and data cache sizes
• It is not recommended to apply every existing tuning
recommendation without understanding what it
means and testing it under multi-user load
– Many recommendations are for specific cases only and may
even degrade performance otherwise
12. Time Considerations
• If any long-running, resource-consuming tasks are
needed, schedule them for the time of minimal load
– Such as large report books, consolidations, calculations
• EPM activities are usually tied to the financial cycle
– As part of closing the quarter, the year, etc.
– Heavy activity during some periods and low during others
• Some EPM activities depend on others
– The workload mix changes depending on the place in the
financial cycle
– There may be several different workload profiles
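The idea of several workload profiles can be captured as plain data in a test plan. The periods and percentages below are invented for illustration, not measured EPM workloads:

```python
# Illustrative sketch: the workload mix changes with the financial cycle,
# so a test plan can carry several profiles. All numbers are made up.

WORKLOAD_PROFILES = {
    "quarter_close": {"consolidations": 0.40, "reports": 0.50, "data_entry": 0.10},
    "normal":        {"consolidations": 0.05, "reports": 0.35, "data_entry": 0.60},
}

def profile_for(period: str) -> dict:
    """Return the workload mix for a period, defaulting to 'normal'."""
    return WORKLOAD_PROFILES.get(period, WORKLOAD_PROFILES["normal"])

mix = profile_for("quarter_close")
assert abs(sum(mix.values()) - 1.0) < 1e-9  # a mix should sum to 100%
```

Testing only one profile risks missing the bottlenecks that appear when the mix shifts at quarter close.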
13. Performance Testing
• EPM products are tested for performance
– But it doesn’t guarantee performance of a specific application
• Every application is different
• Performance testing of your application is a way to
alleviate performance risk
– Highly recommended for large installations
• Performance testing of EPM products is complex
– If done improperly, it may easily lead to wrong conclusions
– Make sure that everything is correlated/parameterized
properly if not using Oracle consulting
14. Creating Realistic Load
• An important part of performance testing is creating
a realistic load
• The number of virtual users
• What users do
• Different user names, different POVs, etc. (script
parameterization)
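What parameterized multi-user load looks like can be sketched as follows; the real thing would be a load testing tool, and `do_user_session` here is a stub standing in for a virtual-user script:

```python
import concurrent.futures

# Minimal sketch of parameterized multi-user load generation.
# `do_user_session` is a stub that just records its parameters.

calls = []

def do_user_session(user: str, pov: str) -> None:
    calls.append((user, pov))  # stand-in for issuing real requests

users = [f"planner{i:03d}" for i in range(10)]  # distinct user names
povs = ["Q1-Actual", "Q2-Forecast"]             # distinct points of view

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    for i, user in enumerate(users):
        pool.submit(do_user_session, user, povs[i % len(povs)])

# Every virtual user runs under its own name and POV, rather than all
# users replaying one identical recorded session.
```

Replaying the same user and data for everyone exercises caches, not the system, which is exactly the kind of unrealistic load the slide warns against.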
15. Scripting Challenges
• Multiple variables to be correlated
– Including SSO token, repository token, etc.
– Specific for every component
• Load testing tools may not report errors when
correlation or parameterization is incorrect, but the
system behavior will be unpredictable
– Use other ways to check that the script works, such as checking
for a specific server response, application logs, and system state
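One way to verify that a script really works is to validate response content rather than trusting status codes, since a server can return HTTP 200 with an error page. The marker and error strings below are hypothetical examples:

```python
# Sketch: validate the response body explicitly, because a failed
# correlation can still produce a "successful" HTTP response.
# The marker string is a hypothetical example.

def response_ok(body: str, expected_marker: str = "Consolidated Balance") -> bool:
    """Valid only if expected content is present and no error text appears."""
    lowered = body.lower()
    if "error" in lowered or "session expired" in lowered:
        return False
    return expected_marker in body

assert response_ok("<html>Consolidated Balance Sheet</html>")
assert not response_ok("Session expired, please log in again")
```

Most load testing tools support this kind of check (often called an assertion or verification point); the important part is to use it on every parameterized request.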
16. Sizing and Capacity Planning
• The Installation Start Here document provides typical
configurations for some products for 100, 500, and
1000 users with 35% concurrency
• Many difficult to formalize factors
– Use Oracle services
• Benchmarking documents are usually not good for
sizing
– The benchmarked application may be very different from yours
• A more rigorous approach is to use modeling
– Based on the amount of resources needed per transaction for
your application
– Done as a service by Oracle consultants
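The modeling idea can be sketched with simple arithmetic (the numbers and function below are hypothetical, for illustration only): measure the resource cost per transaction for your application, multiply by expected throughput, and add headroom for peaks.

```python
def cpu_cores_needed(cpu_sec_per_txn, txn_per_hour, target_utilization=0.6):
    """Estimate CPU cores from per-transaction cost and expected
    throughput, keeping average utilization below a target so the
    system can absorb peaks."""
    cpu_sec_per_sec = cpu_sec_per_txn * txn_per_hour / 3600.0
    return cpu_sec_per_sec / target_utilization

# e.g. 2 CPU-seconds per transaction at 18,000 transactions/hour
# needs 10 CPU-sec/sec of capacity; at 60% target utilization,
# roughly 17 cores.
```

Real sizing must also account for memory, I/O, and network per transaction, and for the changing workload mix over the financial cycle.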
18. The System is Slow
• Typical knee-jerk reactions are usually not effective
– Add more hardware
• May help only if the bottleneck is lack of a specific
resource
• Even if more CPU power is needed, it is difficult to guess
where and how much without analysis
– Submitting a support Service Request
• Product defects are rather rare compared with configuration
and application design issues
• Nothing can be figured out until the problem is analyzed
and narrowed down
– Submitting a vague SR may slow down investigation
19. What Can Be a Problem?
• Lack of hardware resources
– May be network bandwidth, CPU resources, memory, I/O
• Tuning / configuration
– On all tiers including network, operating
system, storage, application
• Application design
– May be any part including metadata, forms, rules, etc.
• Product issue
– Rare compared with the issues above
20. Performance Troubleshooting
• Use Top Down approach
• Investigate step by step, narrowing the problem
• What exactly and where exactly is slow?
– Is it slow for one user?
– Does it change with time?
– What components are active (see monitoring results)?
– Do you see slowness in back end?
• For example, in logs
– What activity or data is it related to?
• For example, is it related to a specific web form?
21. Monitoring
• Ongoing monitoring of all components
– A way to check system health
– Input for future changes / capacity planning
– Input for performance troubleshooting
• May be done using most enterprise monitoring tools
– May be done with OS-level tools, although it is usually not the
best choice for ongoing production monitoring.
• What to monitor?
– System-level metrics (CPU, memory, I/O, network)
– Process-level metrics for major components (CPU, memory)
– Database metrics
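As a minimal illustration of the OS-level approach (a Unix-only sketch using only the Python standard library; the function and field names are hypothetical, and a real deployment would use an enterprise monitoring tool instead), a periodic system-level sample could look like:

```python
import os
import shutil
import time

def take_sample(path="/"):
    """Collect a minimal system-level sample (Unix-only sketch).
    A production monitor would also track per-process CPU/memory
    for key components and database metrics."""
    load1, load5, load15 = os.getloadavg()   # run-queue length averages
    disk = shutil.disk_usage(path)
    return {
        "timestamp": time.time(),
        "load_1min": load1,
        "load_5min": load5,
        "disk_used_pct": 100.0 * disk.used / disk.total,
    }
```

Collecting such samples on a schedule and keeping the history gives both the health check and the baseline needed for troubleshooting and capacity planning.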
22. Component Diagrams
• Understanding of how requests flow through the
system is very important for all performance-related
questions:
– Distributing components over hardware
– Monitoring
– Performance testing
– Performance troubleshooting
23. EPM Requests Flow
• The collaboration between components is really
sophisticated, but not all of them are equally critical
from the performance point of view
– Focus on high-concurrency user requests
– Simplified component diagrams are presented here to
highlight high-concurrency components
– See the “Installation Start Here” manual for more detailed
diagrams
24. HFM Components - Simplified
Browser SmartView Win Client
OHS
Foundation
HFM Web Server
HFM App Server
Relational DB
25. Planning Components - Simplified
Browser SmartView
OHS
Foundation
Planning
Essbase
Relational DB ProviderServices
26. Financial Reporting - Simplified
Browser
Planning Essbase
OHS
Foundation R&A WebServer
FR Web App
HFM
Relational DB FR Print Server
R&A Services
27. Mapping to System Processes
• Each component may be mapped to one or several system
processes
• Most Web applications are represented by HyS9<name>
processes on Windows and Java processes on *unix.
– Use ps -ef | grep <name> on *unix to find the PID
• Key processes for HFM are HsvDataSource (one per
application*) for App server, w3wp for Web server
• Key processes for Essbase are ESSSVR (one per
application*)
* application, according to the traditional EPM terminology, refers to a specific
implementation for the given product
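As an illustration, the `ps -ef` output mentioned above can also be filtered programmatically. The helper below is a hypothetical sketch that parses ps-style text and returns the PIDs of matching processes (in `ps -ef` output the PID is the second column and the command starts at the eighth):

```python
def find_pids(ps_output: str, name: str):
    """Parse `ps -ef`-style output and return the PIDs of processes
    whose command line mentions `name` (e.g. ESSSVR, HsvDataSource)."""
    pids = []
    for line in ps_output.splitlines():
        cols = line.split()
        # cols[1] is the PID; cols[7:] is the command and its arguments
        if len(cols) > 7 and name in " ".join(cols[7:]):
            pids.append(int(cols[1]))
    return pids
```

This kind of mapping is what lets process-level monitoring results be attributed to specific EPM components.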
28. Essbase Application Logs
[Fri May 13 10:56:09 2011]Local/gsi1/Plan1/admin/6844/Info(1003037)
Data Load Updated [30507] cells
[Fri May 13 10:56:09 2011]Local/gsi1/Plan1/admin/6844/Info(1003051)
Data Load Elapsed Time for [SQL] with [AIFData.rul] : [6.516] seconds
[Fri May 13 10:09:05 2011]Local/gsi1/Plan1/admin/6500/Info(1020055)
Spreadsheet Extractor Elapsed Time : [0.157] seconds
[Fri May 13 10:09:05 2011]Local/gsi1/Plan1/admin/6500/Info(1020082)
Spreadsheet Extractor Big Block Allocs -- Dyn.Calc.Cache : [0] non-
Dyn.Calc.Cache : [0]
[Fri May 13 10:12:41 2011]Local/gsi1/Plan1/admin/5308/Info(1020055)
Spreadsheet Extractor Elapsed Time : [0.031] seconds
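Since the timings of interest all appear in ‘Elapsed Time’ records like those above, they are easy to extract for analysis. This is an illustrative sketch (the function name is hypothetical; the log format matches the sample lines shown):

```python
import re

# Matches both plain records ("Elapsed Time : [0.157] seconds") and
# data-load records ("Elapsed Time for [SQL] with [AIFData.rul] : ...").
ELAPSED = re.compile(
    r"Elapsed Time(?: for \[.*?\] with \[.*?\])? : \[([\d.]+)\] seconds")

def elapsed_times(log_text: str):
    """Return all 'Elapsed Time' values (in seconds) from an Essbase
    application log, e.g. to spot unusually slow loads or retrievals."""
    return [float(m.group(1)) for m in ELAPSED.finditer(log_text)]
```

Feeding a day's log through such a filter quickly shows which operations dominate response time.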
30. Service Request
• If you still believe that it is a product issue, submit an SR
with the full results of your analysis, monitoring details,
and logs
– The statement “the system is slow” is not enough
• There may be additional tools for investigation, such
as debug and profiling flags, but they need to be driven
by Oracle support
32. Exemplary Performance Issues
• Let’s discuss several typical performance problems
– Each has a recognizable pattern
– Each happens often enough to be worth knowing about
33. Examples: CPU Issues
• Using [almost] all available CPU
• May indicate lack of hardware resources
– Add more servers for the component
– Verify that adding hardware will solve the problem
34. Example: Dynamic Members in Essbase
• High Essbase CPU during concurrent reading
• Dynamic members are very useful in some cases, but
they are recalculated each time they are retrieved
• That means they shouldn’t be used if retrieved
concurrently
– Use them only when they are retrieved occasionally
• Solution: make concurrently retrieved dynamic members
stored, or remove them from concurrent activities such
as reports / web forms
35. Examples: Memory Issues
• Servers should have enough memory for all components
– Using all machine memory (paging, swapping) kills performance
• 32-bit application process memory is limited
– Windows 2GB (3GB in some cases)
– Memory-hungry applications may benefit from 64-bit
• Java application memory consumption is defined by JVM
settings
– Max heap size –Xmx
– Monitor actual heap size
36. Examples: I/O Issues
• Planning writes back – the same requirements as for
OLTP systems
• Relational databases could have high I/O
– HFM, FDM, ERPI
• Striping
– The best RAID performance is with striping of data across
multiple drives (RAID-0), which may be combined with
mirrored disks (RAID-0+1 or RAID-10)
– No RAID-5
• Split index files, data files, and control files across
different I/O channels if possible
37. Examples: Network Issues
• EPM provides a rich web interface improving the user
experience
– It may not perform well over networks with high
latency or low bandwidth (remote offices)
– It should be checked in every situation where users are not
on the same LAN as the servers
• For example, measure real network bandwidth and
compare it with the throughput generated by a user
– If bandwidth is the issue, software/hardware HTTP
compression may alleviate the problem
• Another solution may be using Citrix/Remote Desktop
38. Scripting Example: HFM Consolidation
• Need a loop to be created in the script
web_custom_request("XMLDataGrid.asp_4",
  "URL=http://{WebSrv}/hfm/Data/XMLDataGrid.asp?Action=PROCMGTEXECUTE"
  "&TaskID={ConsolMode}&Rows=0&ColStart=0&ColEnd=0"
  "&SelType=1&Format=JavaScript", ...);
do {
  sleep(3000);
  web_reg_find("Text=1", "SaveCount=abc_count", LAST);
  web_custom_request("XMLDataGrid.asp_5",
    "URL=http://{WebSrv}/hfm/Data/XMLDataGrid.asp?Action=GETCONSOLSTATUS",
    ...);
} while (strcmp(lr_eval_string("{abc_count}"), "1") == 0);
39. Scripting Example: HFM Web Data Entry Forms
• To parameterize, we need not only department names,
but also department IDs from the repository
web_submit_data("WebFormGenerated.asp",
  "Action=http://hfmtest.us.schp.com/HFM/data/WebFormGenerated.asp?FormName=Tax+QFP",
  ITEMDATA,
  "Name=SubmitType", "Value=1", ENDITEM,
  "Name=FormPOV", "Value=TaxQFP", ENDITEM,
  "Name=FormPOV", "Value=2007", ENDITEM,
  "Name=FormPOV", "Value=Periodic", ENDITEM,
  "Name=FormPOV", "Value={department_name}", ENDITEM,
  "Name=MODVAL_19.2007.50331648.1.{department_id}.14.409.2130706432.4.1.90.0.345",
  "Value=<1.7e+2>;;", ENDITEM, LAST);
40. Summary
• Performance Assurance is ongoing performance risk
mitigation during the whole system lifecycle
– Including design, development, testing, and production
• Performance testing of your application is a way to
alleviate performance risk
• Performance testing of EPM products is not
straightforward
• Use top down approach for performance
troubleshooting
Editor's Notes
Oracle Enterprise Performance Management (EPM) System includes a suite of performance management applications, a suite of business intelligence (BI) applications, a common foundation of BI tools and services, and a variety of datasources – all integrated using Oracle Fusion Middleware.
Performance Assurance for EPM is ongoing performance risk mitigation during the whole system lifecycle. EPM products are thoroughly tested for performance, but performance of specific implementations depends on how they are designed and constructed (metadata, data, forms, grids, rules, etc.- all these artifacts are different for each implementation).
The steps listed are just an outline, some steps will be discussed in more details later in this presentation.
The main point that all these activities should continue through the whole system lifecycle and the same performance metrics should be tracked through all steps.
Performance requirements are supposed to be tracked from the system inception through the whole system lifecycle, including design, development, testing, operations, and maintenance. However, different groups of people are involved at each stage, each using their own vision, terminology, metrics, and tools, which makes the subject confusing when going into details.
Throughput is the rate at which incoming requests are completed. Throughput defines the load on the system and is measured in operations per time period. It may be the number of transactions per second or the number of reports per hour. In most cases we are interested in a steady mode, when the number of incoming requests equals the number of processed requests. The number of users doesn’t, by itself, define throughput. Without defining what each user is doing and how intensely (i.e. throughput for one user), the number of users doesn’t make much sense as a measure of load. What users do also defines which components they use and how intensely.
For example, both very deep member hierarchies and flat member hierarchies may cause issues under load. See the documentation and best practices documents for details for specific applications.
Very large objects (web forms, reports) may require some tuning, like increasing the JVM heap size, even for one user. A hardware upgrade (with the exception of CPU speed) is usually not beneficial for single-user issues – assuming there are no inherent issues with the hardware configuration, such as memory so small that the system starts paging even with one user.
Multiple tuning documents are available and should be checked for details. For example: Essbase Database Administrator Guide, “Optimizing Essbase”; Hyperion Financial Management (HFM) Performance Tuning Guide, Fusion Edition (Doc ID 1083460.1).
In cases of any long-running, resources-consuming tasks it may be more efficient just to schedule them for the time of minimal load instead of trying to tune and optimize them to run in parallel with high-concurrency load.
It is impossible to predict performance of your application without at least some performance testing.
Running multiple users hitting the same set of data (with same Point of View, POV) is an easy way to get misleading results. If it is for reporting, the data could be completely cached and we get much better results than in production. If it is, for example, for web data entry forms, it could cause concurrency issues and we get much worse results than in production. So scripts should be parameterized (fixed or recorded data should be replaced with values from a list of possible choices) so that each user uses a proper set of data. The term “proper” here means different enough to avoid problems with caching and concurrency, which is specific for the system, data, and test requirements.
Unfortunately, a lack of error messages during a load test does not mean that the system worked correctly. A very important part of load testing is workload verification. We should be sure that the applied workload is doing what it is supposed to do and that all errors are caught and logged. It can be done directly by analyzing server responses or, in cases when this is impossible, indirectly. For example, by analyzing the application log or database for the existence of particular entries.
The suggested “typical” configurations are for average applications designed according to best practices. Since performance heavily depends on the way applications are implemented, it is difficult to properly size applications that are unique in one or more ways (and many are) without collecting at least some performance information.
Investigate before acting. “Shooting in the dark” rarely helps, and only adds frustration.
There may be many reasons for bad performance, including lack of hardware resources, inadequate tuning or configuration, issues with custom application design, or even an issue with the product itself (which is relatively rare). And, of course, it may be a combination of issues.
One complication may be that several performance issues disguise each other. This makes the investigation more difficult, but there is still no other way than to identify and fix every issue one by one. No magic bullets here.
Monitoring may be done with OS-level tools (such as Performance Monitor for Windows and vmstat, ps, sar for UNIX), although that is usually not the best choice for ongoing production monitoring. Things to monitor: system-level resource utilization metrics, process-level metrics for key processes, and database metrics.
Understanding which component is doing what is very important. During performance testing, for example, you need to know which components to pay attention to. And, vice versa, seeing activity on a component during monitoring, you may guess what kind of workload causes that activity.
It doesn’t mean that other components never have performance issues – it just means that they are used mostly by a few users or for one-time kinds of activities, usually with low concurrency. Due to time limitations, only the highest-concurrency products and paths are discussed. The presentation mainly covers the products typically having the highest concurrency in most EPM implementations: Hyperion Planning, Hyperion Financial Management, Hyperion Essbase, and reporting solutions (Hyperion Financial Reporting and Hyperion SmartView). A detailed discussion of even a single product can hardly fit into a single presentation timeframe, so these products are mentioned here as examples to illustrate the advocated approaches. Further details can be found in the manuals and product-specific documents. More information is in the Component Architecture documents at http://www.oracle.com/technetwork/middleware/bi-foundation/resource-library-090986.html
This is a simplified HFM component diagram for the components and flows usually involved in high-concurrency transactions. The components needing the most attention from the performance point of view are highlighted with a yellow and red glow. The choice of components / highlighting is based on the author’s personal experience only and was simplified to fit presentation slides. Other components may be important from the performance point of view too. OHS stands for Oracle HTTP Server. *Foundation consisted of two components, Shared Services and Workspace, before version 11.1.2.
The main components for Planning from the performance point of view are the Planning Web application (a J2EE application) and Essbase as its main datastore. The relational database is used mostly as the repository, so it is usually not a bottleneck.
The main components here from the performance point of view are the Financial Reporting Web application and the data sources. To illustrate the importance of understanding request flow: the Financial Reporting Print Server is used only for PDF printing. So it is one of the most important components to monitor if PDF printing is involved, and completely irrelevant if there is no PDF printing. *Before version 11.1.2 there were three components (the Financial Reporting Web application server, Report Server, and Scheduler Server – the last two being standalone Java applications) instead of a single Financial Reporting Web application server.
Each component may be mapped to one or several system processes. Most Web applications are represented by HyS9<name> processes on Windows and Java processes on *unix. Use ps -ef | grep <name> on *unix to find the PID for a specific component. The key process for the HFM application server is HsvDataSource, and for Essbase it is ESSSVR. One such process is spawned per application, so there may be multiple such processes (while the orchestrating HsxServer and ESSBASE processes, respectively, usually don’t use many resources). The key process for the HFM Web server is w3wp. A combination of all artifacts, including metadata, data, forms, rules, etc. is traditionally referred to in EPM as an application. This creates some terminological confusion: the product itself may be referred to as an application, and one specific implementation inside such a product is also referred to as an application. Talking about performance assurance in this presentation, we usually mean an implementation for the given product.
Essbase application logs provide timing for all transactions. Look for ‘Elapsed Time’ records.
Start and end times for many HFM tasks may be found in the Task Audit (data retrieval only for Financial Reporting) in the most convenient form. In the logs there would be separate records for starting and ending tasks.
The more the issue is investigated and narrowed down, the better the chances that support will be able to help.
Many issues have a very recognizable pattern and happen often enough to be worth being aware of.
Verify that adding hardware will solve the problem. For example, if the server is maxed out with 150 users and you need to support 200 users, there is a good chance that adding a second server will solve the problem (to be sure, it needs to be tested). However, if the server is maxed out with 10 users and you need to support 200 users, it is better to revisit design and tuning; adding hardware doesn’t look like a good option.
Dynamic members are an example of an issue that can’t be found without a multi-user workload. They may be fine with one user and expose the problem only under concurrent load.
To investigate JVM memory issues, in most cases you need to monitor the actual heap size (which usually requires additional tools; some come with application servers). In some cases Java process memory may be monitored if the initial (-Xms) and maximum (-Xmx) heap sizes are set to different values, but the results may be obscured by the way the OS manages memory.
HTTP compression adds overhead, so it may not be a good solution for LAN users.
What each request is doing is defined by the ?Action= part. In some contexts/versions, during recording you get multiple GETCONSOLSTATUS requests; the number of GETCONSOLSTATUS requests recorded depends on the processing time. If you play back such a script, it will work in the following way: the script submits the consolidation in the EXECUTE request and then calls GETCONSOLSTATUS three times. If we have a timer around these requests, the response time will be almost instantaneous, while in reality the consolidation may take many minutes or even hours (yes, this is a good example of when people may be happy having a one-hour response time in a Web application). If we have several iterations in the script, we will submit several consolidations, which continue to work in the background competing for the same data, while we report sub-second response times. Consolidation scripts require creating an explicit loop around GETCONSOLSTATUS to catch the end of the consolidation.
Another example is HFM Web data entry forms. To parameterize such a script, we need not only department names but also department IDs (an internal representation not visible to users, which should be extracted from the metadata repository). If department IDs are not parameterized, the script won’t work – although no errors will be reported.