This document provides guidance and best practices for migrating database workloads to infrastructure as a service (IaaS) in Microsoft Azure. It discusses choosing the appropriate virtual machine series and storage options to meet performance needs. The document emphasizes migrating the workload, not the hardware, and using cloud services such as automated patching and backup snapshots to simplify management. It also recommends bringing existing monitoring and management tools to the cloud when possible rather than replacing them. The key takeaways are to understand the workload's demands, choose optimal IaaS configurations, leverage cloud-enabled tools, and involve database experts when issues arise so the root cause is addressed rather than simply adding resources.
This document discusses managing storage across public and private resources. It covers the evolution of on-site storage management, storage options in the public cloud, and challenges of managing hybrid cloud storage. Key topics include the transition from siloed storage to software-defined storage, various cloud storage services like object storage and block storage, challenges of public cloud limitations, and solutions for connecting on-site and cloud storage like gateways, file systems, and caching appliances.
Kudu: Resolving Transactional and Analytic Trade-offs in Hadoop
Kudu is a new column-oriented storage system for Apache Hadoop that is designed to address the gaps in transactional processing and analytics in Hadoop. It aims to provide high throughput for large scans, low latency for individual rows, and database semantics like ACID transactions. Kudu is motivated by the changing hardware landscape with faster SSDs and more memory, and aims to take advantage of these advances. It uses a distributed table design partitioned into tablets replicated across servers, with a centralized metadata service for coordination.
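The tablet design described above can be illustrated with a small sketch: rows are assigned to tablets by hashing their primary key, so point reads and writes touch a single server while full scans fan out across all tablets. This is a toy stand-in, not Kudu's actual hash function or client API.

```python
import hashlib

NUM_TABLETS = 4  # illustrative; Kudu lets you choose the bucket count per table

def tablet_for_key(primary_key: str, num_tablets: int = NUM_TABLETS) -> int:
    """Map a primary key to a tablet by hashing -- a stand-in for
    Kudu's hash partitioning (Kudu's real hash function differs)."""
    digest = hashlib.sha256(primary_key.encode()).hexdigest()
    return int(digest, 16) % num_tablets

# Rows with the same key always land on the same tablet, which is what
# gives single-row operations low latency alongside high-throughput scans.
rows = ["user#1", "user#2", "user#3", "user#1"]
placement = [tablet_for_key(k) for k in rows]
assert placement[0] == placement[3]  # same key -> same tablet
```

Real Kudu tables also support range partitioning and combinations of hash and range, but the routing idea is the same: the key alone determines the tablet.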
A brave new world in mutable big data relational storage (Strata NYC 2017)
The ever-increasing interest in running fast analytic scans on constantly updating data is stretching the capabilities of HDFS and NoSQL storage. Users want the fast online updates and serving of real-time data that NoSQL offers, as well as the fast scans, analytics, and processing of HDFS. Additionally, users are demanding that big data storage systems integrate natively with their existing BI and analytic technology investments, which typically use SQL as the standard query language of choice. This demand has led big data back to a familiar friend: relationally structured data storage systems.
Todd Lipcon explores the advantages of relational storage and reviews new developments, including Google Cloud Spanner and Apache Kudu, which provide a scalable relational solution for users who have too much data for a legacy high-performance analytic system. Todd explains how to address use cases that fall between HDFS and NoSQL with technologies like Apache Kudu or Google Cloud Spanner and how the combination of relational data models, SQL query support, and native API-based access enables the next generation of big data applications. Along the way, he also covers suggested architectures, the performance characteristics of Kudu and Spanner, and the deployment flexibility each option provides.
This presentation is for those of you who are interested in moving your on-prem SQL Server databases and servers to Azure virtual machines (VMs) in the cloud so you can take advantage of all the benefits of being in the cloud. This is commonly referred to as a “lift and shift” as part of an Infrastructure-as-a-Service (IaaS) solution. I will discuss the various Azure VM sizes and options, migration strategies, storage options, high availability (HA) and disaster recovery (DR) solutions, and best practices.
HBaseConAsia2018 Track3-6: HBase at Meituan
The document discusses HBase multi-tenancy features including RSGroup for compute resource isolation, DNGroup for storage isolation, and replication isolation. It also covers object storage solutions in HBase like MOB and YARN log storage, as well as techniques for isolating large queries. Bugs and fixes are mentioned relating to these features.
Software Defined Storage: Real or BS? (2014)
This document discusses software defined storage and evaluates whether it is a real technology or just hype. It defines software defined storage as storage software that runs on standard x86 server hardware and can be sold as software or as an appliance. The document examines different types of software defined storage, such as storage that runs on a single server, in a virtual machine, or across multiple hypervisor hosts in a scale-out cluster. It also compares the benefits and challenges of converged infrastructure solutions using software defined storage versus dedicated storage arrays.
The 10 Best PostgreSQL Replication Strategies for Your Enterprise
This webinar will help you understand the differences between the various replication approaches, recognize the requirements of each strategy, and get a clear picture of what can be achieved with each one. With that, you will hopefully be better able to determine which PostgreSQL replication types your system really needs.
- How physical and logical replication work in PostgreSQL
- Differences between synchronous and asynchronous replication
- Advantages, disadvantages, and challenges of multi-master replication
- Which replication strategy is better suited to which use cases
Speaker:
Borys Neselovskyi, Regional Sales Engineer DACH, EDB
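The synchronous-versus-asynchronous distinction in the list above comes down to when the primary acknowledges a commit. The toy model below simulates that commit path; it is a pure-Python illustration of the trade-off, not PostgreSQL code.

```python
# Toy model of the commit path: synchronous replication waits for the
# standby's acknowledgment before reporting success to the client;
# asynchronous replication returns immediately and ships WAL later.
# Pure simulation -- no PostgreSQL server involved.

class Standby:
    def __init__(self):
        self.wal = []

    def apply(self, record):
        self.wal.append(record)
        return True  # acknowledgment back to the primary

def commit(record, standby, synchronous=True):
    acked = False
    if synchronous:
        acked = standby.apply(record)  # client waits for the standby
    # else: record ships in the background; client is not delayed
    return {"committed": True, "replicated_before_ack": acked}

s = Standby()
print(commit("tx1", s, synchronous=True))   # durable on standby before success
print(commit("tx2", s, synchronous=False))  # success first, replication later
```

The trade-off the webinar covers falls out of this: synchronous mode cannot lose an acknowledged transaction but adds the standby's round trip to every commit, while asynchronous mode keeps commits fast at the cost of a small window of possible data loss.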
------------------------------------------------------------
For more #webinars, visit http://bit.ly/EDB-Webinars
Download free #PostgreSQL whitepapers: http://bit.ly/EDB-Whitepapers
Read our #Postgres Blog http://bit.ly/EDB-Blogs
Follow us on Facebook at http://bit.ly/EDB-FB
Follow us on Twitter at http://bit.ly/EDB-Twitter
Follow us on LinkedIn at http://bit.ly/EDB-LinkedIn
Reach us via email at marketing@enterprisedb.com
This document discusses migrating an Oracle Database Appliance (ODA) from bare metal to a virtualized platform. It outlines the initial situation, desired target, challenges, and solution approach. The key challenges included system downtime during the migration, backup/restore processes, using external storage, and database reorganizations. The solution involved first converting to a virtual platform and then upgrading, using backup/restore, attaching an NGENSTOR Hurricane storage appliance for direct attached storage, and moving database reorganizations to a separate maintenance window. It also discusses the odaback-API tool created to help automate and standardize the migration process.
Deploying Flash in the Data Center discusses various ways to deploy flash storage in the data center to improve performance. It describes all-flash arrays, which provide the highest performance but at the highest cost, as well as less expensive options like hybrid arrays that combine flash and disk. It also covers using flash in servers or as a cache to accelerate storage arrays. Choosing the best approach depends on factors like workload, budget, and existing infrastructure.
What's So Special about the Oracle Database Appliance?
A presentation most recently delivered by Simon Haslam at the UKOUG Tech14 conference, though given elsewhere in various forms including Oracle Gebruikersclub Holland and an online RAC SIG seminar.
The slides introduce the Oracle Database Appliance (ODA) and discuss how you can use it to easily deploy both databases and WebLogic Server. Three case studies are covered, and the presentation wraps up by considering when the ODA might be most suitable for your organisation.
This latest Winter 2014 version includes ODA 12c updates (database and WebLogic).
What is Trove, the Database as a Service on OpenStack?
Trove was integrated into the IceHouse release of OpenStack to provision and manage databases in an OpenStack Cloud. With Trove, developers can spin up a database instance on demand in an instant.
Please sign up for upcoming OpenStack Online Meetups: http://www.meetup.com/OpenStack-Online-Meetup/
Cloud Migration Paths: Kubernetes, IaaS, or DBaaS
Moving to the cloud is hard, and moving Postgres databases to the cloud is even harder. Public cloud or private cloud? Infrastructure as a Service (IaaS), or Platform as a Service (PaaS)? Kubernetes for the application, or for the database and the application? This talk will juxtapose self-managed Kubernetes and container-based database solutions, Postgres deployments on IaaS, and Postgres DBaaS solutions of which EDB’s DBaaS BigAnimal is the latest example.
How to Get a Game Changing Performance Advantage with Intel SSDs and Aerospike
Frank Ober of Intel’s Solutions Group will review how he achieved 1+ million transactions per second on a single dual-socket Xeon server with SSDs, using Aerospike's open source benchmarking tools. The presentation will include a live demo showing the performance of a sample system. We will cover:
The state of Key-value Stores on modern SSDs.
What choices you make in your selection process of hardware that will most benefit a consistent deployment of Aerospike.
How to run an Aerospike mesh on a single machine.
How replication works across that mesh, and which settings allow for maximum threading and scale.
We will also focus on some key learnings and the Total Cost of Ownership choices that will make your deployment more effective long term.
Microsoft Azure Cosmos DB is a multi-model database that supports document, key-value, wide-column and graph data models. It provides high throughput, low latency and global distribution across multiple regions. Cosmos DB supports multiple APIs including SQL, MongoDB, Cassandra and Gremlin to allow developers to use their preferred API based on their application needs and skills. It also provides automatic scaling of throughput and storage across all data partitions.
This document discusses database deployment automation. It begins with introductions and an example of a problematic Friday deployment. It then reviews the concept of automation and different visions of it within an organization. Potential tools and frameworks for automation are discussed, along with common pitfalls. Basic deployment workflows using Oracle Cloud Control are demonstrated, including setting credentials, creating a proxy user, adding target properties, and using a job template. The document concludes by emphasizing that database deployment automation is possible but requires effort from multiple teams.
This document provides guidance and best practices for using Infrastructure as a Service (IaaS) on Microsoft Azure for database workloads. It discusses key differences between IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS). The document also covers Azure-specific concepts like virtual machine series, availability zones, storage accounts, and redundancy options to help architects design cloud infrastructures that meet business requirements. Specialized configurations like constrained VMs and ultra disks are also presented along with strategies for ensuring high performance and availability of database workloads on Azure IaaS.
This document discusses best practices for migrating database workloads to Azure Infrastructure as a Service (IaaS). Some key points include:
- Choosing the appropriate VM series like E or M series optimized for database workloads.
- Using availability zones and geo-redundant storage for high availability and disaster recovery.
- Sizing storage correctly based on the database's input/output needs and using premium SSDs where needed.
- Bringing existing monitoring and management tools to the cloud to preserve familiarity, and automating tasks like backups, patching, and problem resolution.
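The storage-sizing point above can be made concrete: premium SSDs cap IOPS and throughput per disk, so you stripe several disks to meet a database's needs. The sketch below uses P30 figures (5,000 IOPS, 200 MB/s, 1 TiB) as published in Azure's documentation at the time; verify current limits before sizing a real deployment.

```python
import math

# Rough sizing sketch: pick enough premium SSDs that none of the
# per-disk limits (IOPS, throughput, capacity) is the bottleneck.
# P30 figures below match Azure's published specs at the time of writing.
P30 = {"iops": 5000, "mbps": 200, "gib": 1024}

def disks_needed(req_iops, req_mbps, req_gib, disk=P30):
    """Disks to stripe so the binding constraint (IOPS, MB/s, or GiB) is met."""
    return max(math.ceil(req_iops / disk["iops"]),
               math.ceil(req_mbps / disk["mbps"]),
               math.ceil(req_gib / disk["gib"]))

# A database needing 12,000 IOPS, 350 MB/s, and 2 TiB:
print(disks_needed(12_000, 350, 2048))  # IOPS is the binding limit here
```

Note that the VM series itself also caps aggregate disk IOPS and throughput, so the disk count only helps up to the VM's own limit; that is part of why the VM-series choice in the first bullet matters.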
VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract...
This document provides an overview and agenda for a presentation on virtualizing SQL Server workloads on VMware vSphere. The presentation will cover designing SQL Server virtual machines for performance in production environments, consolidating multiple SQL Server workloads, and ensuring SQL Server availability using vSphere features. It emphasizes understanding the workload, optimizing for storage and network performance, avoiding swapping, using large memory pages, and accounting for NUMA when configuring SQL Server virtual machines.
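The NUMA guidance mentioned above amounts to a simple check: keep a VM's vCPUs and memory within one physical NUMA node where possible, so the guest avoids remote memory access. A minimal sketch, with illustrative node sizes:

```python
# Sketch of NUMA-aware VM sizing. If the VM fits inside one physical
# NUMA node, all memory access is local; if not, the hypervisor should
# expose virtual NUMA (vNUMA) so the guest OS and SQL Server can place
# memory sensibly. Node sizes here are illustrative.

def fits_in_numa_node(vcpus, mem_gb, node_cores, node_mem_gb):
    """True if the VM can be scheduled entirely inside one NUMA node."""
    return vcpus <= node_cores and mem_gb <= node_mem_gb

# Host with two NUMA nodes of 12 cores / 192 GB each:
print(fits_in_numa_node(8, 128, 12, 192))   # fits -> no remote memory access
print(fits_in_numa_node(16, 256, 12, 192))  # spans nodes -> rely on vNUMA
```

This is a planning heuristic, not vSphere configuration; the actual knobs (vNUMA exposure, cores-per-socket) are set in the VM's hardware settings as the session describes.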
Running Oracle EBS in the cloud (DOAG TECH17 edition)
This presentation is based on a real life experience migrating Oracle E-Business Suite production to AWS.
We will talk about:
- Certification basics. Overview on supported configurations.
- How to build. Recommendations based on migration and 2 year production runtime experience.
- Advanced configurations.
- R12.2.
- Microsoft Azure and Oracle Cloud review. Quick comparison outline of main alternative platforms. How ready is Oracle's own cloud service.
- Scaling.
This topic is in high demand among clients. Many are looking into cloud migration options and how to optimize costs compared to on-premises hosting, and many underestimate the complexity of making the Oracle EBS stack ready for cloud deployment.
Sql Start! 2020 - SQL Server Lift & Shift su Azure
Slides from the session delivered during SQL Start! 2020, where I illustrate different approaches to determine the best landing zone for your SQL Server workloads.
Video (ITA): https://youtu.be/1hqT_xHs0Qs
Maaz Anjum - IOUG Collaborate 2013 - An Insight into Space Realization on ODA...
The document provides an overview of Maaz Anjum, a solutions architect specializing in Oracle products like OEM12c, Golden Gate, and Engineered Systems. It lists his email, blog, and experience using Oracle products since 2001. It also provides details about Bias Corporation, the company he works for, including its founding date, certifications, expertise, customers, and implementations.
This document proposes a virtual heterogeneous database platform to address challenges with physical database servers like low utilization and high costs. It would provide a virtualization platform to host multiple database types and high availability solutions in virtual machines, improving efficiency through automated provisioning and management. The document discusses database server models, high availability solutions like Datakeeper and clustering, operations team concerns about flexibility and testing, and monitoring tools.
The document discusses various considerations for deploying applications and solutions using Microsoft Azure Virtual Machines (VMs). It covers VM sizing configurations including CPU, memory, storage, and I/O capabilities for different VM series. It also discusses deployment strategies like availability sets and resource groups. Other topics include networking, security, costs, limits, and best practices.
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices
This document provides an overview of advanced SQL Server techniques and best practices when running SQL Server in a virtualized environment on vSphere. It covers topics such as storage configuration including VMFS, block alignment, and I/O profiling. Networking techniques like jumbo frames and guest tuning are discussed. The document also reviews memory management and optimization, CPU sizing considerations, workload consolidation strategies, and high availability options for SQL Server on vSphere.
Virtual SAN 5.5 provides a technical deep dive into VMware's Virtual SAN software-defined storage technology. Key points include:
- Virtual SAN runs on standard x86 servers and provides a policy-based management framework and high performance flash architecture.
- It delivers scale of up to 32 hosts, 3,200 VMs, 4.4 petabytes, and 2 million IOPS.
- Virtual SAN is integrated with VMware technologies like vMotion, vSphere HA, and vSphere replication and simplifies storage management.
- It offers flexible configurations, granular scaling, and reduces both capital and operating expenses for improved total cost of ownership.
Microsoft SQL Server is one of the most widely deployed “apps” in the market today and is used as the database layer for a myriad of applications, ranging from departmental content repositories to large enterprise OLTP systems. Typical SQL Server workloads are somewhat trivial to virtualize; however, business critical SQL Servers require careful planning to satisfy performance, high availability, and disaster recovery requirements. It is the design of these business critical databases that will be the focus of this breakout session. You will learn how to build high-performance SQL Server virtual machines through proper resource allocation, database file management, and use of all-flash storage like XtremIO. You will also learn how to protect these critical systems using a combination of SQL Server and vSphere high availability features. For example, did you know you can vMotion shared-disk Windows Failover Cluster nodes? You can in vSphere 6! Finally, you will learn techniques for rapid deployment, backup, and recovery of SQL Server virtual machines using an all-flash array.
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. The agenda covers all-flash Ceph and its use cases, QCT's IOPS-optimized all-flash Ceph solution, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, the benefits of using NVMe storage, Ceph tuning recommendations, and the benefits of using multi-partitioned NVMe SSDs for Ceph OSDs.
This document provides best practices for virtualizing mission critical applications like Exchange and SQL Server. It discusses the top 10 myths about virtualizing business critical applications and provides the truths. It then discusses best practices for virtualizing Exchange, including starting simple, licensing, storage configuration, and high availability options. For SQL Server, it covers starting simple, licensing, storage configuration, migrating, and database best practices. It also discusses tools that can be used for database performance analysis when virtualized like Confio IgniteVM and vCenter Operations.
This document provides a summary of a presentation on virtualizing tier one applications. The presentation covered the top 10 myths about virtualizing business critical applications and provided best practices for virtualizing mission critical applications. It also discussed real world tools for monitoring virtualized environments like Confio IgniteVM and vCenter Operations. The presentation aimed to show that virtualizing tier one applications is possible and discussed strategies for virtualizing SQL Server and Microsoft Exchange environments.
So you have been running on-prem SQL Server for a while now. Maybe you have taken the step to move it from bare metal to a VM, and have seen some nice benefits. Ready to see a TON more benefits? If you said “YES!”, then this is the session for you as I will go over the many benefits gained by moving your on-prem SQL Server to an Azure VM (IaaS). Then I will really blow your mind by showing you even more benefits by moving to Azure SQL Database (PaaS/DBaaS). And for those of you with a large data warehouse, I also got you covered with Azure SQL Data Warehouse. Along the way I will talk about the many hybrid approaches so you can take a gradual approach to moving to the cloud. If you are interested in cost savings, additional features, ease of use, quick scaling, improved reliability and ending the days of upgrading hardware, this is the session for you!
Jeremy Beard, a senior solutions architect at Cloudera, introduces Kudu, a new column-oriented storage system for Apache Hadoop designed for fast analytics on fast changing data. Kudu is meant to fill gaps in HDFS and HBase by providing efficient scanning, finding and writing capabilities simultaneously. It uses a relational data model with ACID transactions and integrates with common Hadoop tools like Impala, Spark and MapReduce. Kudu aims to simplify real-time analytics use cases by allowing data to be directly updated without complex ETL processes.
This document discusses managing storage across public and private resources. It covers the evolution of on-site storage management, storage options in the public cloud, and challenges of managing hybrid cloud storage. Key topics include the transition from siloed storage to software-defined storage, various cloud storage services like object storage and block storage, challenges of public cloud limitations, and solutions for connecting on-site and cloud storage like gateways, file systems, and caching appliances.
Kudu: Resolving Transactional and Analytic Trade-offs in Hadoopjdcryans
Kudu is a new column-oriented storage system for Apache Hadoop that is designed to address the gaps in transactional processing and analytics in Hadoop. It aims to provide high throughput for large scans, low latency for individual rows, and database semantics like ACID transactions. Kudu is motivated by the changing hardware landscape with faster SSDs and more memory, and aims to take advantage of these advances. It uses a distributed table design partitioned into tablets replicated across servers, with a centralized metadata service for coordination.
A brave new world in mutable big data relational storage (Strata NYC 2017)Todd Lipcon
The ever-increasing interest in running fast analytic scans on constantly updating data is stretching the capabilities of HDFS and NoSQL storage. Users want the fast online updates and serving of real-time data that NoSQL offers, as well as the fast scans, analytics, and processing of HDFS. Additionally, users are demanding that big data storage systems integrate natively with their existing BI and analytic technology investments, which typically use SQL as the standard query language of choice. This demand has led big data back to a familiar friend: relationally structured data storage systems.
Todd Lipcon explores the advantages of relational storage and reviews new developments, including Google Cloud Spanner and Apache Kudu, which provide a scalable relational solution for users who have too much data for a legacy high-performance analytic system. Todd explains how to address use cases that fall between HDFS and NoSQL with technologies like Apache Kudu or Google Cloud Spanner and how the combination of relational data models, SQL query support, and native API-based access enables the next generation of big data applications. Along the way, he also covers suggested architectures, the performance characteristics of Kudu and Spanner, and the deployment flexibility each option provides.
This presentation is for those of you who are interested in moving your on-prem SQL Server databases and servers to Azure virtual machines (VM’s) in the cloud so you can take advantage of all the benefits of being in the cloud. This is commonly referred to as a “lift and shift” as part of an Infrastructure-as-a-service (IaaS) solution. I will discuss the various Azure VM sizes and options, migration strategies, storage options, high availability (HA) and disaster recovery (DR) solutions, and best practices.
HBaseConAsia2018 Track3-6: HBase at MeituanMichael Stack
The document discusses HBase multi-tenancy features including RSGroup for compute resource isolation, DNGroup for storage isolation, and replication isolation. It also covers object storage solutions in HBase like MOB and YARN log storage, as well as techniques for isolating large queries. Bugs and fixes are mentioned relating to these features.
Software defined storage real or bs-2014Howard Marks
This document discusses software defined storage and evaluates whether it is a real technology or just hype. It defines software defined storage as storage software that runs on standard x86 server hardware and can be sold as software or as an appliance. The document examines different types of software defined storage like storage that runs on a single server, in a virtual machine, or across multiple hypervisor hosts in a scale-out cluster. It also compares the benefits and challenges of converged infrastructure solutions using software defined storage versus dedicated storage arrays.
Die 10 besten PostgreSQL-Replikationsstrategien für Ihr UnternehmenEDB
Dieses Webinar hilft Ihnen, die Unterschiede zwischen den verschiedenen Replikationsansätzen zu verstehen, die Anforderungen der jeweiligen Strategie zu erkennen und sich über die Möglichkeiten klar zu werden, was mit jeder einzelnen zu erreichen ist. Damit werden Sie hoffentlich eher in der Lage sein, herauszufinden, welche PostgreSQL-Replikationsarten Sie wirklich für Ihr System benötigen.
- Wie physische und logische Replikation in PostgreSQL funktionieren
- Unterschiede zwischen synchroner und asynchroner Replikation
- Vorteile, Nachteile und Herausforderungen bei der Multi-Master-Replikation
- Welche Replikationsstrategie für unterschiedliche Use-Cases besser geeignet ist
Referent:
Borys Neselovskyi, Regional Sales Engineer DACH, EDB
------------------------------------------------------------
For more #webinars, visit http://bit.ly/EDB-Webinars
Download free #PostgreSQL whitepapers: http://bit.ly/EDB-Whitepapers
Read our #Postgres Blog http://bit.ly/EDB-Blogs
Follow us on Facebook at http://bit.ly/EDB-FB
Follow us on Twitter at http://bit.ly/EDB-Twitter
Follow us on LinkedIn at http://bit.ly/EDB-LinkedIn
Reach us via email at marketing@enterprisedb.com
This document discusses migrating an Oracle Database Appliance (ODA) from a bare metal to a virtualized platform. It outlines the initial situation, desired target, challenges, and solution approach. The key challenges included system downtime during the migration, backup/restore processes, using external storage, and database reorganizations. The solution involved first converting to a virtual platform and then upgrading, using backup/restore, attaching an NGENSTOR Hurricane storage appliance for direct attached storage, and moving database reorganizations to a separate maintenance window. It also discusses the odaback-API tool created to help automate and standardize the migration process.
2015 deploying flash in the data centerHoward Marks
Deploying Flash in the Data Center discusses various ways to deploy flash storage in the data center to improve performance. It describes all-flash arrays that provide the highest performance but also more expensive options like hybrid arrays that combine flash and disk. It also covers using flash in servers or as a cache to accelerate storage arrays. Choosing the best approach depends on factors like workload, budget, and existing infrastructure.
What's So Special about the Oracle Database Appliance?O-box
A presentation most recently delivered by Simon Haslam at the UKOUG Tech14 conference, though given elsewhere in various forms including Oracle Gebruikersclub Holland and an online RAC SIG seminar.
The slides introduce the Oracle Database Apppliance (ODA) and discusses how you can use it to easily deploy both databases and WebLogic Server. Three case studies are covered and the presentation wraps up considering when the ODA might be most suitable for your organisation.
This latest Winter 2014 version includes ODA 12c updates (database and WebLogic).
What is Trove, the Database as a Service on OpenStack?OpenStack_Online
Trove was integrated into the IceHouse release of OpenStack to provision and manage databases in an OpenStack Cloud. With Trove developers can spin up a database instance on-demand in an instant.
Please sign up for upcoming OpenStack Online Meetups: http://www.meetup.com/OpenStack-Online-Meetup/
Cloud Migration Paths: Kubernetes, IaaS, or DBaaSEDB
Moving to the cloud is hard, and moving Postgres databases to the cloud is even harder. Public cloud or private cloud? Infrastructure as a Service (IaaS), or Platform as a Service (PaaS)? Kubernetes for the application, or for the database and the application? This talk will juxtapose self-managed Kubernetes and container-based database solutions, Postgres deployments on IaaS, and Postgres DBaaS solutions of which EDB’s DBaaS BigAnimal is the latest example.
How to Get a Game Changing Performance Advantage with Intel SSDs and AerospikeAerospike, Inc.
Frank Ober of Intel’s Solutions Group will review how he achieved 1+ million transactions per second on a single dual socket Xeon Server with SSDs using the open source tools of Aerospike for benchmarking. The presentation will include a live demo showing the performance of a sample system. We will cover:
The state of Key-value Stores on modern SSDs.
What choices you make in your selection process of hardware that will most benefit a consistent deployment of Aerospike.
How to run an Aerospike mesh on a single machine.
How to work replication of that mesh, and what values allow for maximum threading and scale.
We will also focus on some key learnings and the Total Cost of Ownership choices that will make your deployment more effective long term.
Microsoft Azure Cosmos DB is a multi-model database that supports document, key-value, wide-column and graph data models. It provides high throughput, low latency and global distribution across multiple regions. Cosmos DB supports multiple APIs including SQL, MongoDB, Cassandra and Gremlin to allow developers to use their preferred API based on their application needs and skills. It also provides automatic scaling of throughput and storage across all data partitions.
This document discusses database deployment automation. It begins with introductions and an example of a problematic Friday deployment. It then reviews the concept of automation and different visions of it within an organization. Potential tools and frameworks for automation are discussed, along with common pitfalls. Basic deployment workflows using Oracle Cloud Control are demonstrated, including setting credentials, creating a proxy user, adding target properties, and using a job template. The document concludes by emphasizing that database deployment automation is possible but requires effort from multiple teams.
This document provides guidance and best practices for using Infrastructure as a Service (IaaS) on Microsoft Azure for database workloads. It discusses key differences between IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS). The document also covers Azure-specific concepts like virtual machine series, availability zones, storage accounts, and redundancy options to help architects design cloud infrastructures that meet business requirements. Specialized configurations like constrained VMs and ultra disks are also presented along with strategies for ensuring high performance and availability of database workloads on Azure IaaS.
This document discusses best practices for migrating database workloads to Azure Infrastructure as a Service (IaaS). Some key points include:
- Choosing the appropriate VM series like E or M series optimized for database workloads.
- Using availability zones and geo-redundant storage for high availability and disaster recovery.
- Sizing storage correctly based on the database's input/output needs and using premium SSDs where needed.
- Bringing existing monitoring and management tools to the cloud to provide familiarity, and automating tasks like backups, patching, and problem resolution.
VMworld Europe 2014: Advanced SQL Server on vSphere Techniques and Best Pract... (VMworld)
This document provides an overview and agenda for a presentation on virtualizing SQL Server workloads on VMware vSphere. The presentation will cover designing SQL Server virtual machines for performance in production environments, consolidating multiple SQL Server workloads, and ensuring SQL Server availability using vSphere features. It emphasizes understanding the workload, optimizing for storage and network performance, avoiding swapping, using large memory pages, and accounting for NUMA when configuring SQL Server virtual machines.
This presentation is based on real-life experience migrating an Oracle E-Business Suite production environment to AWS.
We will talk about:
- Certification basics. Overview of supported configurations.
- How to build. Recommendations based on migration and 2 year production runtime experience.
- Advanced configurations.
- R12.2.
- Microsoft Azure and Oracle Cloud review. A quick comparison of the main alternative platforms, and how ready Oracle's own cloud service is.
- Scaling.
This is a topic in high demand among clients. Many are looking into cloud migration options and how to optimize cost compared to on-premises hosting, and many underestimate the complexity of making the Oracle EBS stack capable of cloud deployment.
Sql Start! 2020 - SQL Server Lift & Shift su Azure (Marco Obinu)
Slides from the session delivered during SQL Start! 2020, where I illustrate different approaches to determine the best landing zone for your SQL Server workloads.
Video (ITA): https://youtu.be/1hqT_xHs0Qs
Maaz Anjum - IOUG Collaborate 2013 - An Insight into Space Realization on ODA... (Maaz Anjum)
The document provides an overview of Maaz Anjum, a solutions architect specializing in Oracle products like OEM12c, Golden Gate, and Engineered Systems. It lists his email, blog, and experience using Oracle products since 2001. It also provides details about Bias Corporation, the company he works for, including its founding date, certifications, expertise, customers, and implementations.
This document proposes a virtual heterogeneous database platform to address challenges with physical database servers like low utilization and high costs. It would provide a virtualization platform to host multiple database types and high availability solutions in virtual machines, improving efficiency through automated provisioning and management. The document discusses database server models, high availability solutions like Datakeeper and clustering, operations team concerns about flexibility and testing, and monitoring tools.
The document discusses various considerations for deploying applications and solutions using Microsoft Azure Virtual Machines (VMs). It covers VM sizing configurations including CPU, memory, storage, and I/O capabilities for different VM series. It also discusses deployment strategies like availability sets and resource groups. Other topics include networking, security, costs, limits, and best practices.
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices (VMworld)
This document provides an overview of advanced SQL Server techniques and best practices when running SQL Server in a virtualized environment on vSphere. It covers topics such as storage configuration including VMFS, block alignment, and I/O profiling. Networking techniques like jumbo frames and guest tuning are discussed. The document also reviews memory management and optimization, CPU sizing considerations, workload consolidation strategies, and high availability options for SQL Server on vSphere.
Virtual SAN 5.5 provides a technical deep dive into VMware's Virtual SAN software-defined storage technology. Key points include:
- Virtual SAN runs on standard x86 servers and provides a policy-based management framework and high performance flash architecture.
- It delivers scale of up to 32 hosts, 3,200 VMs, 4.4 petabytes, and 2 million IOPS.
- Virtual SAN is integrated with VMware technologies like vMotion, vSphere HA, and vSphere replication and simplifies storage management.
- It offers flexible configurations, granular scaling, and reduces both capital and operating expenses for improved total cost of ownership.
VMworld 2015: Advanced SQL Server on vSphere (VMworld)
Microsoft SQL Server is one of the most widely deployed “apps” in the market today and is used as the database layer for a myriad of applications, ranging from departmental content repositories to large enterprise OLTP systems. Typical SQL Server workloads are somewhat trivial to virtualize; however, business critical SQL Servers require careful planning to satisfy performance, high availability, and disaster recovery requirements. It is the design of these business critical databases that will be the focus of this breakout session. You will learn how build high-performance SQL Server virtual machines through proper resource allocation, database file management, and use of all-flash storage like XtremIO. You will also learn how to protect these critical systems using a combination of SQL Server and vSphere high availability features. For example, did you know you can vMotion shared-disk Windows Failover Cluster nodes? You can in vSphere 6! Finally, you will learn techniques for rapid deployment, backup, and recovery of SQL Server virtual machines using an all-flash array.
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture (Ceph Community)
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, and techniques for configuring and optimizing all-flash Ceph performance.
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture (Danielle Womboldt)
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, QCT's lab test environment, Ceph tuning recommendations, and benefits of using multi-partitioned NVMe SSDs for Ceph OSDs.
Virtualizing Tier One Applications - Varrow (Andrew Miller)
This document provides best practices for virtualizing mission critical applications like Exchange and SQL Server. It discusses the top 10 myths about virtualizing business critical applications and provides the truths. It then discusses best practices for virtualizing Exchange, including starting simple, licensing, storage configuration, and high availability options. For SQL Server, it covers starting simple, licensing, storage configuration, migrating, and database best practices. It also discusses tools that can be used for database performance analysis when virtualized like Confio IgniteVM and vCenter Operations.
Varrow Q4 Lunch & Learn Presentation - Virtualizing Business Critical Applica... (Andrew Miller)
This document provides a summary of a presentation on virtualizing tier one applications. The presentation covered the top 10 myths about virtualizing business critical applications and provided best practices for virtualizing mission critical applications. It also discussed real world tools for monitoring virtualized environments like Confio IgniteVM and vCenter Operations. The presentation aimed to show that virtualizing tier one applications is possible and discussed strategies for virtualizing SQL Server and Microsoft Exchange environments.
Should I move my database to the cloud? (James Serra)
So you have been running on-prem SQL Server for a while now. Maybe you have taken the step to move it from bare metal to a VM, and have seen some nice benefits. Ready to see a TON more benefits? If you said “YES!”, then this is the session for you as I will go over the many benefits gained by moving your on-prem SQL Server to an Azure VM (IaaS). Then I will really blow your mind by showing you even more benefits by moving to Azure SQL Database (PaaS/DBaaS). And for those of you with a large data warehouse, I also got you covered with Azure SQL Data Warehouse. Along the way I will talk about the many hybrid approaches so you can take a gradual approve to moving to the cloud. If you are interested in cost savings, additional features, ease of use, quick scaling, improved reliability and ending the days of upgrading hardware, this is the session for you!
Azure VM 101 - HomeGen by CloudGen Verona (Marco Obinu)
Slides presented during HomeGen by CloudGen Verona, about how to properly size an Azure IaaS VM, with an additional focus on high availability and cost-saving topics.
Session recording: https://youtu.be/C8v6c6EkJ9A
Demo: https://github.com/OmegaMadLab/SqlIaasVmPlayground
This document discusses handling massive writes for online transaction processing (OLTP) systems. It begins with an introduction and overview of the topics to be covered, including terminology, differences between massive reads versus writes, and potential solutions using relational databases, NoSQL databases, and code optimizations. Specific solutions discussed for massive writes include using memory, fast disks, caching, column-oriented databases, SQL tuning, database partitioning, reading from slaves, and sharding or splitting data across multiple databases. The document provides pros and cons of each approach and examples of performance improvements observed.
How to deploy SQL Server on Microsoft Azure virtual machines (SolarWinds)
Running apps on Microsoft Azure Virtual Machines is tempting; promising faster deployments and lower overall TCO. But how easy is it really to configure and run SQL Server in an Azure VM environment? Learn what you should know about tuning, optimizing, and key indicators for monitoring performance, as well as special considerations for High-Availability and Disaster Recovery.
The document provides information about Azure disk storage options including:
- Azure now offers a Cold Tier storage option for infrequently accessed data with long-term retention needs.
- An upcoming Azure Storage Mover public preview will support migrating files and folders to Azure Storage from SMB and Azure Files sources.
- The Azure Hour schedule includes upcoming sessions on Azure Data Disks, EPIC on Azure, and Oracle on Azure.
- Standard, Premium, Ultra disks are optimized for different workloads based on performance needs including IOPS, throughput, and latency. Choosing the right disk type depends on workload requirements.
These are my keynote slides from SQL Saturday Oregon 2023 on the intersection of AI, machine learning, and economic challenges as a technical specialist.
This document discusses migrating high IO SQL Server workloads to Azure. It begins by explaining that every company has at least one "whale" workload that requires high CPU, memory and IO. These whales can be challenging to move to the cloud. The document then provides tips on determining if a workload's issue is truly high IO or caused by another factor. It discusses various wait events that may indicate IO problems and tools for monitoring IO performance. Finally, it covers some considerations for IO in the cloud.
This document provides an overview of options for running Oracle solutions on Microsoft Azure infrastructure as a service (IaaS). It discusses architectural considerations for high availability, disaster recovery, storage, licensing, and migrating workloads from Oracle Exadata. Key points covered include using Oracle Data Guard for replication and failover, storage options like Azure NetApp Files that can support Exadata workloads, and identifying databases that are not dependent on Exadata features for lift and shift to Azure IaaS. The document aims to help customers understand how to optimize their use of Oracle solutions when deploying to Azure.
This document discusses strategies for managing ADHD as an adult. It begins by describing the three main types of ADHD - inattentive, hyperactive-impulsive, and combined. It then lists some of the biggest challenges of ADHD like executive dysfunction, disorganization, lack of attention, procrastination, and internal preoccupation. The document provides tips and strategies for overcoming each challenge through organization, scheduling, list-making, breaking large tasks into small ones, and using technology tools. It emphasizes finding accommodations that work for the individual and their specific ADHD presentation and challenges.
Kellyn Gorman shares her experience living with ADHD and strategies for turning it into a positive. She discusses how ADHD impacted her childhood and how it still presents challenges as an adult. However, with the right tools and understanding of her needs, she is able to find success. She provides tips for organizing, prioritizing tasks, managing distractions, and accessing support. The key is learning about ADHD and how to structure one's environment and routine to play to one's strengths rather than fighting against the condition.
Migrating Oracle workloads to Azure requires understanding the workload and hardware requirements. It is important to analyze the workload using the Automatic Workload Repository (AWR) report to accurately size infrastructure needs. The right virtual machine series and storage options must be selected to meet the identified input/output and capacity needs. Rather than moving existing hardware, the focus should be migrating the Oracle workload to take advantage of cloud capabilities while ensuring performance and high availability.
This document discusses overcoming silos when implementing DevOps for a new product at a company. The teams involved were dispersed globally and siloed in their tools and processes. Challenges included isolating workload sizes, choosing a Linux image, and team ownership issues. The solution involved aligning teams, automating deployment with Bash scripts called by Terraform and Azure DevOps, and evolving the automation. This improved communication, decreased teams from 120 people to 7, and increased deployments and profits for the successful project.
This document provides an overview of how to successfully migrate Oracle workloads to Microsoft Azure. It begins with an introduction of the presenter and their experience. It then discusses why customers might want to migrate to the cloud and the different Azure database options available. The bulk of the document outlines the key steps in planning and executing an Oracle workload migration to Azure, including sizing, deployment, monitoring, backup strategies, and ensuring high availability. It emphasizes adapting architectures for the cloud rather than directly porting on-premises systems. The document concludes with recommendations around automation, education resources, and references for Oracle-Azure configurations.
This document discusses the future of data and the Azure data ecosystem. It highlights that by 2025 there will be 175 zettabytes of data in the world and the average person will have over 5,000 digital interactions per day. It promotes Azure services like Power BI, Azure Synapse Analytics, Azure Data Factory and Azure Machine Learning for extracting value from data through analytics, visualization and machine learning. The document provides overviews of key Azure data and analytics services and how they fit together in an end-to-end data platform for business intelligence, artificial intelligence and continuous intelligence applications.
This is the second session of the learning pathway at PASS Summit 2019, which is still a stand-alone session that teaches you how to write proper Linux BASH scripts.
This document discusses techniques for optimizing Power BI performance. It recommends tracing queries using DAX Studio to identify slow queries and refresh times. Tracing tools like SQL Profiler and log files can provide insights into issues occurring in the data sources, Power BI layer, and across the network. Focusing on optimization by addressing wait times through a scientific process can help resolve long-term performance problems.
The document provides tips and tricks for scripting success on Linux. It begins with introducing the speaker and emphasizing that the session will focus on best practices for those already familiar with BASH scripting. It then details various tips across multiple areas: setting the shell and environment variables, adding headers and comments to scripts, validating input, implementing error handling and debugging, leveraging utilities like CRON for scheduling, and ensuring scripts continue running across sessions. The tips are meant to help authors write more readable, maintainable, and reliable scripts.
Mentors provide guidance and support, while sponsors use their influence to advocate for and promote a protege's career. Obtaining both mentors and sponsors is important for advancing in one's field and overcoming biases, yet women often have fewer sponsors than men. The document outlines strategies for how women can find and work with sponsors, and how men can act as allies in supporting women. Developing representation of women in technology fields through mentorship and sponsorship can help initiatives become self-sustaining over time.
Kellyn Pot'Vin-Gorman presented on GDPR compliance. Some key points include:
- GDPR went into effect in May 2018 and covers any data belonging to an EU citizen.
- Fines for non-compliance can be up to 4% of annual revenue or €20 million.
- DBAs play a role in identifying critical data, auditing processes, and reporting on compliance.
- An AI tool assessed the privacy policies of 14 major companies and found they all failed to meet GDPR requirements.
- Achieving compliance requires security frameworks, data mapping, encryption, access controls, and dedicated teams.
This document provides tips for optimizing performance in Power BI by focusing on different areas like data sources, the data model, visuals, dashboards, and using trace and log files. Some key recommendations include filtering data early, keeping the data model and queries simple, limiting visual complexity, monitoring resource usage, and leveraging log files to identify specific waits and bottlenecks. An overall approach of focusing on time-based optimization by identifying and addressing the areas contributing most to latency is advocated.
Kellyn Pot’Vin-Gorman discusses DevOps tools for winning agility. She emphasizes that while many organizations automate testing, the DevOps journey is longer and involves additional steps like orchestration between environments, security, collaboration, and establishing a culture of continuous improvement. She also stresses that organizations should not forget about managing their data as part of the DevOps process and advocates for approaches like database virtualization to help enhance DevOps initiatives.
The document discusses various Linux system monitoring utilities including SAR, SADC/SADF, MPSTAT, VMSTAT, and TOP. SAR provides CPU, memory, I/O, network, and other system activity reports. SADC collects system data which SADF can then format and output. MPSTAT reports processor-level statistics. VMSTAT provides virtual memory statistics. TOP displays active tasks and system resources usage.
6. Benefits of IaaS
- Quicker migration times than other cloud offerings
- Ability to keep a similar architecture
- Introduce cloud services and features
- Remove the datacenter
7. "Insanity is doing the same thing over and over again and expecting different results." ~Einstein
*Also infrastructure folks who continually try to lift and shift the infrastructure for database workloads…
8. Migrate the Workload, not the Hardware
- Servers may not have been sized appropriately for the workload.
- The database workload may have changed over time.
- It may cost you more in licensing than what the workload actually requires.
Different databases have different tools to assist:
- SQL Server: DMVs, PerfMon, scripting (Randal, Klee, etc.), Redgate SQL Monitor
- Oracle: AWR, OEM, ASH, SASH, Statspack, tracing
- MySQL: SolarWinds DPA, Instrumental, Panopta
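The sizing these tools inform can be sketched in a few lines of Python. This is a minimal, hypothetical example (the function name and the sample numbers are invented for illustration); the point is that the observed peak and percentile demand, not the old server's rated capacity, should drive the target VM and disk choice:

```python
# Size from the observed workload, not the hardware: given per-sample IOPS
# readings (e.g., exported from PerfMon or an AWR report), report the peak,
# 95th-percentile, and average demand the target Azure VM/disk must satisfy.
def workload_io_profile(iops_samples):
    ordered = sorted(iops_samples)
    p95_index = int(0.95 * (len(ordered) - 1))  # nearest-rank percentile
    return {
        "peak_iops": ordered[-1],
        "p95_iops": ordered[p95_index],
        "avg_iops": sum(ordered) / len(ordered),
    }

# Hypothetical samples: a mostly quiet workload with a nightly batch spike.
samples = [800, 950, 1200, 1100, 900, 7500, 8200, 1000, 850, 900]
print(workload_io_profile(samples))
```

A server rated for 20,000 IOPS running this workload would be wildly oversized in the cloud; the profile, not the spec sheet, is what should be matched.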
9. Architect for the Cloud
- Deploy all tiers to the cloud:
  - Avoid ingress or egress charges
  - Reduce latency
  - Remove complexity and centrally locate everything in the cloud
- Refactor processes that utilize large percentages of resources and network; in the cloud, this has an impactful cost.
- A lift and shift does not mean duplicating what you have on-prem. Success means taking the database and lifting and shifting it with the support of cloud services.
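To make the egress point above concrete, here is a back-of-the-envelope sketch; the per-GB rate is a placeholder assumption, not current Azure pricing:

```python
# Back-of-the-envelope egress estimate for an app tier left on-prem that
# pulls result sets from a database moved to Azure.
EGRESS_RATE_PER_GB = 0.087  # assumed $/GB, not a quoted Azure price

def monthly_egress_cost(gb_per_day, rate_per_gb=EGRESS_RATE_PER_GB, days=30):
    return round(gb_per_day * days * rate_per_gb, 2)

# 200 GB/day crossing the cloud boundary adds up quickly; co-locating all
# tiers in the same region removes this charge entirely.
print(monthly_egress_cost(200))
```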
11. Understand IaaS VM Series
https://azure.microsoft.com/en-us/pricing/details/virtual-machines/series/
- A- and B-series commonly won't work for databases.
- D-series can work for some; consider matching the series to production VMs, but with lesser resources.
- L- and H-series are outliers for database workloads.
- Identify workload needs:
  - D-series is for general use.
  - E-series and M-series are the most common VMs in the database industry.
  - E-series for average production databases.
  - M-series for the heaviest workloads, but verify IO storage/network limits!
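As an illustration only (not official Azure guidance), the series choice can be treated as a function of the workload profile; the memory-per-vCPU thresholds below are assumptions made for the sketch:

```python
# Illustrative mapping from workload profile to the VM series named above.
# The thresholds are assumptions for this sketch, not Azure guidance;
# always verify against the actual series specs and limits.
def suggest_series(mem_gb_per_vcpu, io_heavy):
    if mem_gb_per_vcpu >= 16 or io_heavy:
        # Memory-hungry or IO-heavy databases land on E- or M-series.
        return "M-series" if mem_gb_per_vcpu >= 24 else "E-series"
    return "D-series"  # general use

print(suggest_series(4, io_heavy=False))   # general-purpose workload
print(suggest_series(16, io_heavy=True))   # average production database
print(suggest_series(28, io_heavy=True))   # large memory-bound workload
```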
13. When One VM Is Too Much: Constrained vCPU VMs
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/constrained-vcpu
- Allow isolating the licensed vCPU count for database and app workloads.
- Matched to existing series VMs in the Azure Pricing Calculator.
- Share storage between databases or apps.
- Before choosing, ensure your product licensing supports constrained vCPU VMs.
- When combining workloads, carefully match them on IO and memory, not just vCPU usage.
14. Specialized Constrained vCPU VMs

Name                 vCPU   Specs
Standard_M8-2ms      2      Same as M8ms
Standard_M8-4ms      4      Same as M8ms
Standard_M16-4ms     4      Same as M16ms
Standard_M16-8ms     8      Same as M16ms
Standard_M32-8ms     8      Same as M32ms
Standard_M32-16ms    16     Same as M32ms
Standard_M64-32ms    32     Same as M64ms
Standard_M64-16ms    16     Same as M64ms
Standard_M128-64ms   64     Same as M128ms
Standard_M128-32ms   32     Same as M128ms
Standard_E4-2s_v3    2      Same as E4s_v3
Standard_E8-4s_v3    4      Same as E8s_v3
Standard_E8-2s_v3    2      Same as E8s_v3
Standard_E16-8s_v3   8      Same as E16s_v3

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/constrained-vcpu
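The naming pattern in the table is regular enough to parse: the hyphenated number is the active vCPU count, and removing it yields the parent size. A small sketch, with the parsing rule inferred from the table rather than taken from any official Azure API:

```python
import re

# A constrained-vCPU VM name encodes both the parent size and the reduced
# vCPU count: Standard_M64-16ms is the M64ms chassis with only 16 vCPUs
# active (memory, storage, and bandwidth stay at the parent's level).
# The pattern below is inferred from the table, not an official Azure API.
def parse_constrained_name(name):
    match = re.match(r"(Standard_[A-Z]\d+)-(\d+)(\w*)$", name)
    if not match:
        return None  # not a constrained-vCPU size
    base, vcpus, suffix = match.groups()
    return {"parent": base + suffix, "vcpus": int(vcpus)}

print(parse_constrained_name("Standard_M64-16ms"))
print(parse_constrained_name("Standard_E8-2s_v3"))
```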
16. https://www.oracle.com/database/technologies/high-availability/maa.html
Architect for the Cloud
• Maximum Availability Architecture
  • Different vendors use different names for it.
  • Get a clear understanding of the SLA uptime for the business and environment.
  • On-prem datacenters are not the same as cloud architecture.
  • Pivot products and services to cover what you need.
• High Availability
  • Identify what HA means to stakeholders.
  • Often it’s specific features, not a product; marry these to a cloud product which:
    • Matches the IaaS architecture
    • Doesn’t introduce overhead
    • Has vendor support
  • Identify what cloud services may duplicate or simulate the same feature if unavailable.
17. Azure Location Concepts

Concept             Description
Region              Multiple datacenters within a specific perimeter, connected through a low-latency network.
Geography           A specific location area; the area may have more than one Azure region.
Availability Zone   Physically separate locations within a region. Each zone has one or more datacenters equipped with independent power, cooling, and network.
Geo-Region          Current recommended region with the appropriate services and redundancy for the database and other workloads.
Secondary Region    Utilized to spread a workload for HA and/or recovery.
20. Use Availability Zones

• High availability (HA) offering to protect data and apps from datacenter failures.
• Contain multiple locations within a single Azure region.
• Not all products or services are available for AZs or in every region.
• No additional cost to deploy VMs in an availability zone.
https://docs.microsoft.com/en-us/azure/availability-zones/az-overview
22. Disaster Recovery

• Along with AZs/AGs, etc., use DR products that best support the cloud: Always On Availability Groups and Oracle Data Guard.
• Implement advanced automation features to remove manual intervention.
• Clearly identify RPO (Recovery Point Objective) and RTO (Recovery Time Objective) for your business.
• Ensure that the HA, DR, backup, and recovery decisions meet these and have been fully TESTED.
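The RPO/RTO check above can be sketched as a simple calculation. This is a hypothetical helper, not an Azure tool; the interval and restore numbers are illustrative assumptions, and a real test means actually restoring and timing it.

```python
# Hypothetical RPO/RTO compliance check. Worst-case data loss equals the
# gap between recovery points; worst-case downtime is the MEASURED restore
# time from a real test, never an estimate.

def meets_objectives(backup_interval_min, restore_min, rpo_min, rto_min):
    worst_case_loss = backup_interval_min  # data since the last recovery point
    return worst_case_loss <= rpo_min and restore_min <= rto_min

# A 15-minute log backup cadence with a tested 45-minute restore,
# against business targets of RPO = 30 min and RTO = 60 min:
print(meets_objectives(15, 45, rpo_min=30, rto_min=60))  # True
```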
24. Storage is SEPARATE and Important

• Ensure you know the IO workload for your database going to the cloud.
• Understand both the MB/s and the IO throughput for the database.
• Oracle has demonstrated, on average, much higher demands for IO than MSSQL, MySQL, or PostgreSQL.
• Storage is separate to ensure the right combination in IaaS can be reached.
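The two metrics above are linked by IO size, which is why both must be measured. A rough sketch of the relationship, where the block sizes are assumptions (8 KB is a typical OLTP IO, 1 MB a typical scan or backup IO):

```python
# Approximate relationship: throughput (MB/s) ~= IOPS * IO size.
# A workload can hit its MB/s ceiling long before its IOPS ceiling,
# or vice versa, depending on IO size.

def mbps(iops, io_size_kb):
    return iops * io_size_kb / 1024  # KB/s -> MB/s

print(mbps(10_000, 8))    # OLTP: 10k 8 KB IOs -> 78.125 MB/s
print(mbps(500, 1024))    # DSS scan: 500 1 MB IOs -> 500.0 MB/s
```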
25. Storage Considerations

• What is the storage to be used for?
  • Data: OLTP, DSS, OLAP, big data?
  • Logging
  • Backup
• Ensure that backup and data refresh requirements are calculated into the IO demands for the database.
28. Ultra Disk Offerings

Disk Size (GiB)                         IOPS Range   Throughput Range (MB/s)
4                                       1,200        300
8                                       2,400        600
16                                      4,800        1,200
32                                      9,600        2,000
64                                      19,200       2,000
128                                     38,400       2,000
256                                     76,800       2,000
512                                     160,000      2,000
1,024-65,536 (in increments of 1 TiB)   160,000      2,000
29. Ultra Disks

• Often the first recommendation by infrastructure teams.
• Be aware of the limitations before recommending for database workloads:
  • Oracle 12.2 and later is supported.
  • Only supports un-cached reads and un-cached writes.
  • Doesn’t support disk snapshots, VM images, availability sets, Azure Dedicated Hosts, or Azure disk encryption.
  • No integration with Azure Backup or Azure Site Recovery.
  • Offers up to 16 TiB per region per subscription unless raised via support.
  • Isn’t available in all regions.

          Capacity per disk (GiB)   IOPS per disk   Throughput per disk (MB/s)
Minimum   4                         100             1
Maximum   65,536                    160,000         2,000

https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-ultra-ssd#ga-scope-and-limitations
Example pricing multipliers: GiB × 0.05, MB/s × 1.01, IOPS × 0.12, vCPU × 4.83
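Since Ultra Disk bills each dimension separately, the multipliers above can be turned into a quick cost sketch. Treat the rates as the slide's illustrative figures, not current Azure pricing, and check the pricing page before sizing.

```python
# Cost sketch using the slide's multipliers (GiB * .05, MB/s * 1.01,
# IOPS * .12, vCPU * 4.83). Illustrative rates only — verify against
# the Azure pricing page for your region.

def ultra_disk_cost(gib, mbps, iops, vcpus=0):
    return gib * 0.05 + mbps * 1.01 + iops * 0.12 + vcpus * 4.83

# A 1 TiB Ultra Disk provisioned at 2,000 MB/s and 38,400 IOPS:
print(round(ultra_disk_cost(1024, 2000, 38_400), 2))  # 6679.2
```

Note how the IOPS term dominates here: provisioning only the IOPS the workload actually needs is where most of the savings live.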
30. Types of Cache Settings

• Available with Premium Storage.
• A multi-tier caching technology, aka BlobCache.
• OS disk: ReadWrite is fine (and the default), but not for datafiles.
• ReadOnly cache is appropriate for data disks, as it caches reads while letting writes pass through to disk.
• Caching is limited to 4,095 GiB per individual premium disk:
  • Any disk above a P40 allocated in its entirety will silently disable read caching.
  • Larger disks are preferably used without caching; otherwise additional space is wasted. For a P50, just allocate 4,095 of the 4,096 GiB.
  • Or use smaller disks and choose to stripe and mirror.
• Availability on M-series and others is VM-series dependent.
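The 4,095 GiB rule above is easy to encode as a sanity check during disk layout. A minimal sketch, assuming the limit stated on the slide:

```python
# The slide's rule of thumb: host read caching is only honored up to
# 4,095 GiB per premium disk, so a fully allocated P50 (4,096 GiB)
# silently loses read caching.

CACHE_LIMIT_GIB = 4095

def read_cache_active(allocated_gib):
    return allocated_gib <= CACHE_LIMIT_GIB

print(read_cache_active(4096))  # False: a full P50 disables read cache
print(read_cache_active(4095))  # True: leave 1 GiB unallocated and keep it
```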
31. IO Throttling

• Why does it happen?
  • No, you can’t have all the resources for yourself.
• What all can be involved?
  • It’s not just the database.
• How to identify it?
• What to do when it is identified?
https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-memory?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json
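One common answer to "what all can be involved?" is that the effective ceiling is the lower of the disk limits and the VM-level limits, and it is easy to size one without the other. A sketch, with illustrative numbers rather than any specific SKU's limits:

```python
# Throttling sketch: the effective cap is min(disk limit, VM limit) on
# both IOPS and MB/s. Limits below are illustrative assumptions.

def effective_limits(disk_iops, disk_mbps, vm_iops, vm_mbps):
    return min(disk_iops, vm_iops), min(disk_mbps, vm_mbps)

def throttled(demand_iops, demand_mbps, limits):
    cap_iops, cap_mbps = limits
    return demand_iops > cap_iops or demand_mbps > cap_mbps

# A 7,500 IOPS / 250 MB/s disk on a VM capped at 12,800 IOPS and
# 192 MB/s: the VM, not the disk, throttles a 240 MB/s workload.
caps = effective_limits(7_500, 250, 12_800, 192)
print(caps)                          # (7500, 192)
print(throttled(6_000, 240, caps))   # True
```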
32. Bring in Additional Solutions

• High IOPS and MB/s: Azure NetApp Files.
• Higher IO throughput: consider Silk, FlashGrid Storage, Pure Storage, or Excelero.
• Consider disk striping of smaller disks and parallel processing at the database level.
• Backups, batch loading, and other challenges:
  • Offload backups with secondary backup solutions.
  • Refactor batch processing with other services (Azure Data Factory, Azure Analysis Services, Databricks, etc.)
33. Azure NetApp Files

• Fully managed PaaS Microsoft Azure storage service.
• All-flash bare-metal storage.
• Only dependent on the NIC, not the VM.
• Available in Standard, Premium (common), and Ultra (optimal) tiers.*
• ANF is native to Azure.

                  Azure Files   Premium Files   Azure NetApp Files             Premium Disk
Performance       1K IOPS       100K IOPS       320K IOPS                      20K IOPS
Capacity Pool     5 TB          100 TB          500 TB                         32 TB
AD Integration    Azure AD      N/A             Bring Your Own AD / Azure AD   N/A
Protocol          SMB           SMB             NFS & SMB                      Disk
Data Protection   LRS only      Snapshots       Backup tools                   Snapshots

*Be aware of pricing with scaling to meet IO.
FAQs About Azure NetApp Files | Microsoft Docs
36. When To Go Old-School

• Depending on the combination of storage, striping, and RAID, performance can vary greatly.
• Verify that the disk is striped correctly (log creation commands and document them).
• Consider smaller disks striped vs. a larger single drive to offer better performance.
• In Linux, consider huge pages and use LVM (Logical Volume Manager) or Oracle ASM (Automatic Storage Management) to provide advanced features for diskgroup layout.
• Keep an eye on disk sector size (there’s a bug requiring 512-byte sector size in Oracle 12.1).
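The stripe-vs-single-disk trade-off above can be quantified: provisioned limits add up across a stripe set until the VM cap is reached. The disk figures below are commonly published premium SSD numbers used as assumptions here; verify them against current docs before sizing.

```python
# Why striping smaller disks can win: limits sum across the stripe set,
# capped by the VM. Disk/VM figures are illustrative assumptions.

def stripe_totals(n_disks, iops_each, mbps_each, vm_iops, vm_mbps):
    return (min(n_disks * iops_each, vm_iops),
            min(n_disks * mbps_each, vm_mbps))

# Four P30-class disks (5,000 IOPS / 200 MB/s each) vs one P50-class
# disk (7,500 / 250), on a VM allowing 25,600 IOPS and 768 MB/s:
print(stripe_totals(4, 5_000, 200, 25_600, 768))  # (20000, 768)
print(stripe_totals(1, 7_500, 250, 25_600, 768))  # (7500, 250)
```

Note the stripe set's MB/s lands on the VM cap (768), illustrating the earlier throttling point: past a certain stripe width, the VM, not the disks, is the limit.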
37. Failure Due to Backups

• Modernize the way the database is backed up and restored if RMAN is 40% of total IO in AWR or the database has a small window to back up.
• Archaic backup and data refresh strategies can heavily impact a cloud environment in IO and network latency.
• Snapshot technology with database consistency should be your FIRST choice in backup solutions for large databases.
• Oracle AWR can demonstrate the impact of RMAN and Data Pump jobs on the overall database workload.
• SQL Server Profiler can identify the workload impact in SQL Server.
39. Simplify the Shift to the Cloud

• Migrate the tools you already use to monitor and manage the database on-prem into the cloud whenever possible.
  • For Oracle, we implement Oracle Enterprise Manager (Cloud Control) to ensure the cloud environment looks just like the on-prem one.
  • Redgate SQL Monitor, SolarWinds SQL Sentry, Dynatrace, Idera Uptime Infrastructure Monitor, etc.
• Automate OS patching using the Azure Linux/Windows automated patching service.
• Incorporate DevOps automation into the cloud changes FIRST.
40. It’s Not Just Infrastructure

• Whether during the migration or when there are issues:
  • Infrastructure support will be the first line of defense.
  • The database workload will be an afterthought.
  • Data support may be a request-only option.
• The first inclination is to “throw iron” at the problem.
  • Demand to look at the code, database design, etc.
  • If you fix the real cause, you fix it once vs. revisiting it over and over.
• Do have support take advantage of advanced Azure tools to help identify where the problem is (IO, memory, CPU).
41. Manage with What You Know

• Use the cloud versions of the services you already use on-prem.
• If you can deploy your existing on-prem tool on a VM and it’s cloud ready, do it (Oracle Enterprise Manager, Redgate, Idera, SolarWinds, etc.)!
• Keep backup and replication tools whenever you are able; don’t create larger learning curves than required.
42. Simulate PaaS in IaaS

• Use Azure SQL Managed Instance for SQL Server.
• Use the Lifecycle Management Pack with Oracle Enterprise Manager to automate monitoring, management, and database patching.
• Use Linux automated patching (preview) to automate OS patching of VMs.
• Introduce Azure services to simplify the current products used on-prem.
• Automate using DevOps, including deployment builds with Terraform, Ansible, etc.
43. Review: Database Workloads on IaaS

• Know the infrastructure.
• Migrate the workload, not the on-prem hardware.
• Know the cause of the problem; don’t guess.
• Bring in existing tools that are cloud-enabled.
• Know what tools are available in the cloud, and when stuck, bring in Azure support.
44. References

SQL Server Performance Guidelines on Azure: Checklist: Best practices & guidelines - SQL Server on Azure VM | Microsoft Docs
Oracle on Azure: Oracle solutions on Microsoft Azure - Azure Virtual Machines | Microsoft Docs
Understanding AZ and AS: Availability options for Azure Virtual Machines - Azure Virtual Machines | Microsoft Docs
Virtual Machine and Disk Performance: Virtual machine and disk performance - Azure Virtual Machines | Microsoft Docs
Azure Premium Storage: Azure Premium Storage: Design for high performance - Azure Virtual Machines | Microsoft Docs
Azure Network Performance for IaaS: Optimize VM network throughput | Microsoft Docs
Infrastructure Automation: Use infrastructure automation tools - Azure Virtual Machines | Microsoft Docs
Ultra Disks for Azure Linux VMs: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/disks-enable-ultra-ssd
The P10 is my favored OS disk; try to always use Premium SSD, available in the VM series with the designation “s” in the name.
P30-P50 is the most common range for datafiles, and we turn on read-only host caching to achieve what we need. The P50 is over the 4,095 GiB caching limit, so just don’t allocate the last 1 GiB and capture a huge performance benefit!
Azure Premium Storage has a multi-tier caching technology called BlobCache, which uses a combination of the host RAM and local SSD for caching I/O. By default, this cache setting is set to ReadWrite for OS disks, which is the disk on which the Linux OS resides, and ReadOnly for data disks, which are the disks on which Oracle database files might reside.
As the name suggests, ReadWrite caches both read I/O and write I/O from the VM; because writes are not persisted directly to storage, this is unsuitable for database applications. Also as the name suggests, ReadOnly caches only read I/O, allowing write I/O to write through directly to storage, which is appropriate for databases.
No one can have it all. One of the benefits of the cloud is also one of the challenges: how to give everyone a share. Throttling occurs. https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-memory?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json