In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application and how to get started.
Amazon Aurora is a MySQL-compatible relational database engine with the speed, reliability, and availability of high-end commercial databases at one-tenth the cost. This session introduces you to Amazon Aurora, explores the capabilities and features of Aurora, explains common use cases, and helps you get started with Aurora. Debanjan Saha, general manager for Aurora, explains how Aurora differs from other commonly available databases while staying compatible with MySQL and providing a high-end, cost-effective alternative to commercial and open-source database engines. In addition, Linda Xu, data architect at Ticketmaster, walks you through Ticketmaster's journey to Amazon Aurora, starting with evaluation through production migration of a critical Ticketmaster database to Amazon Aurora. Ticketmaster is one of the world's top 10 e-commerce companies and the global market leader in ticketing. In this session, Linda discusses how Aurora lets Ticketmaster provide better services to their fans, customers, and clients, and helps reduce the cost and operational burden while giving greater flexibility to support heavy traffic spikes.
This document provides an overview of Amazon Web Services storage and content delivery services, including Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), and Amazon CloudFront. It describes the core capabilities and use cases for each service. The key points are: S3 provides scalable online object storage and retrieval, with unlimited storage capacity and high durability. EBS offers persistent block-level storage volumes for EC2 instances with consistent performance. CloudFront is a content delivery network (CDN) that caches and delivers content globally for websites and applications.
EidosMedia, a global media company, wanted to establish business continuity with highly available storage and the flexibility of the cloud, while maintaining the enterprise storage management capabilities of its on-premises infrastructure. NetApp ONTAP data management software enables EidosMedia to manage its on-premises and cloud data from a single, centralized management console, while leveraging AWS for the flexibility of the cloud. Join our upcoming webinar to learn how NetApp and AWS provided EidosMedia with a seamless platform to support the organization’s critical business applications, enhance workload portability, and accelerate feature development and testing through improved DevOps processes.
This document provides an overview of microservices and Amazon ECS. It discusses what microservices are, the challenges of implementing microservices, and how Amazon ECS addresses these challenges. Specifically, it covers how ECS provides scalable and automated scheduling of containers across a cluster, integration with other AWS services for areas like load balancing, deployment automation through services, and built-in monitoring with CloudWatch. Examples are given of how a company called Wrapp transitioned their microservices architecture to use ECS and the benefits they realized around management of their containerized applications.
Intended for customers who have (or will have) thousands of instances on AWS, this session is about reducing the complexity of managing costs for these large fleets so they run efficiently. Attendees will learn about common roadblocks that prevent large customers from cost optimizing, tools they can use to efficiently remove those roadblocks, and techniques to monitor their rate of cost optimization. The session will include a case study that will talk in detail about the millions of dollars saved using these techniques. Customers will learn about a range of templates they can use to quickly implement these techniques, and also partners who can help them implement these templates. Presented by: Guy Kfir, Senior Account Manager, Amazon Web Services Customer Guest: David Costa, CTO, Fredhopper
- Learn about various AWS storage tiers with respect to cost, performance, throughput, and durability for large-scale distributed processing workloads.
- Learn about various AWS storage tiers with respect to unique media workloads such as transcoding, QC, and VFX/animation rendering.
- Learn about using AWS storage services for hybrid workloads, with content repositories in the cloud and processing on-premises, or vice versa.
- Learn about AWS storage options and how to migrate legacy media applications running on the cloud to re-engineered applications.
- Learn about shared filesystem options on AWS, including Amazon EFS, and how to build your own using partner products on Amazon EC2 and Amazon EBS.
Media companies, driven by higher resolutions and an increasing amount of content due to direct B2C delivery, are looking to leverage cloud compute scalability cost-effectively. Emerging use cases, such as media supply chains, VFX/animation rendering, and transcoding for OTT streaming, require careful planning when being deployed to the cloud. Storage is a component critical to the performance and processing of media. Amazon Web Services provides a variety of highly available, cost-effective storage solutions that can deliver the right performance for the underlying application. This technical session discusses various cloud storage strategies for different content processing workloads. We will take a deep dive into media supply chains (including content transcoding, QC, mastering, and packaging), post-production tasks in the cloud, and other Media & Entertainment workloads.
The document discusses building a cloud-based video platform using microservices architecture. It outlines challenges in content storage, processing and delivery given changing consumer behaviors and business needs. The proposed solution uses a serverless approach with AWS services like S3, Lambda and API Gateway to build independent, interoperable services for storage, processing, delivery and analytics. This allows for rapid innovation, avoiding lock-in and reusing data across services.
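As a minimal sketch of the event-driven pattern described (assuming a hypothetical Lambda function subscribed to S3 upload notifications; the bucket and key names are placeholders, not details from the document), a handler that reacts to newly uploaded video objects might look like this:

```python
import urllib.parse


def extract_s3_objects(event):
    """Pull (bucket, key) pairs out of an S3 event notification payload."""
    objects = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        # S3 URL-encodes object keys in event payloads (spaces become '+').
        key = urllib.parse.unquote_plus(s3.get("object", {}).get("key", ""))
        if bucket and key:
            objects.append((bucket, key))
    return objects


def handler(event, context):
    # In a real deployment this would enqueue a transcode or analytics job
    # in a downstream service; here we only log the uploads.
    for bucket, key in extract_s3_objects(event):
        print(f"new video uploaded: s3://{bucket}/{key}")
    return {"statusCode": 200}
```

Because each service owns its own trigger and does one job, new processing steps can be added as separate functions on the same bucket events without touching existing code, which is the independence the microservices approach is after.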
Learn how Amazon Redshift, our fully managed, petabyte-scale data warehouse, can help you quickly and cost-effectively analyze all of your data using your existing business intelligence tools. Get an introduction to how Amazon Redshift uses massively parallel processing, scale-out architecture, and columnar direct-attached storage to minimize I/O time and maximize performance. Learn how you can gain deeper business insights and save money and time by migrating to Amazon Redshift. Take away strategies for migrating from on-premises data warehousing solutions, tuning schema and queries, and utilizing third party solutions.
This session is for architects and storage admins seeking simple and non-disruptive ways to adopt cloud platforms in their organizations. You will learn how to deliver lower costs and greater scale with nearly seamless integration into your existing backup and recovery (B&R) processes. Services mentioned: S3, Glacier, Snowball, Storage Gateway, third-party partners, and ingestion services.
This document provides an overview of Amazon S3 and object storage solutions on AWS. It discusses how S3 is used by companies like Netflix, SoundCloud and Airbnb to store large amounts of data. It also summarizes the different storage classes (Standard, Infrequent Access, Glacier), options for data transfer like Snowball, and use cases like website hosting, backup/disaster recovery, and analytics. Architectural patterns with events and Lambda are presented for building scalable, serverless applications with S3.
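As a small illustration of the storage classes mentioned, the sketch below maps access frequency to an S3 storage-class name (the class names are real S3 API values; the day thresholds are illustrative assumptions, not AWS guidance):

```python
def choose_storage_class(days_since_last_access):
    """Pick an S3 storage class for an object based on how 'cold' it is."""
    if days_since_last_access < 30:
        return "STANDARD"        # frequently accessed data
    if days_since_last_access < 90:
        return "STANDARD_IA"     # infrequent access, lower storage cost
    return "GLACIER"             # archival, cheapest storage, slow retrieval

# With boto3 the class would be passed straight through, e.g.:
#   s3 = boto3.client("s3")
#   s3.put_object(Bucket="my-bucket", Key="backup.tar",
#                 Body=data, StorageClass=choose_storage_class(120))
```

In practice this kind of tiering is usually expressed declaratively as an S3 lifecycle policy rather than in application code, but the cost trade-off is the same.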
This session is for IT pros working with compliance managers to deliver solutions that lower costs and still meet compliance demands. You will learn how to move large scale data stores to the cloud, while remaining compliant with existing regulations. Services mentioned: S3, Glacier and the Vault Lock feature, Snowball, ingestion services.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started.
AWS and its partners offer a wide range of tools and features to help you meet your security objectives. These tools mirror the familiar controls you deploy within your on-premises environments. AWS provides security-specific tools and features across network security, configuration management, access control, and data security. In addition, AWS provides monitoring and logging tools that can provide full visibility into what is happening in your environment. In this session, you will be introduced to the range of security tools and features that AWS offers, and the latest security innovations coming from AWS.
This session introduces AWS services that you can leverage to build a scalable web application architecture on AWS to handle large-scale traffic flows.
Learn how you can migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases using AWS Database Migration Service. We'll discuss homogeneous (e.g. Oracle-to-Oracle, PostgreSQL-to-PostgreSQL, etc.) and heterogeneous (e.g. Oracle to Aurora, SQL Server to MariaDB) database migrations. We'll also talk about the new AWS Schema Conversion Tool that saves you development time when migrating your Oracle and SQL Server database schemas, including PL/SQL and T-SQL procedural code, to their MySQL, MariaDB and Aurora equivalents. Best of all, we'll spend most of the time demonstrating the product and showing use cases designed to help your business.
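As a rough sketch of what driving such a migration programmatically can look like, the pure function below assembles the parameters a DMS replication task takes (the ARNs, identifier, and the include-everything table-mapping rule are illustrative placeholders, not values from this session); a live call would pass the result to boto3's `create_replication_task`:

```python
import json


def dms_task_params(task_id, source_arn, target_arn, instance_arn, cdc=True):
    """Build the parameter set for a DMS replication task."""
    return {
        "ReplicationTaskIdentifier": task_id,
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        # full-load-and-cdc copies the existing data, then keeps replicating
        # ongoing changes -- this is what enables minimal-downtime cutover.
        "MigrationType": "full-load-and-cdc" if cdc else "full-load",
        "TableMappings": json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    }

# A live migration would run:
#   boto3.client("dms").create_replication_task(**dms_task_params(...))
```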
1) Getting Started with AWS Security provides an overview of AWS security best practices, including understanding the AWS shared responsibility model, building strong compliance foundations, integrating identity and access management, enabling detective controls, establishing network security, implementing data protection, optimizing change management, and automating security functions.
2) Statoil migrated applications and infrastructure to AWS to achieve a cloud-first strategy. They established security automation, self-service provisioning, and continuous monitoring using native AWS services to securely manage their AWS environment.
3) Evolving security architecture practices involves treating security as part of the development process through automation, embedding architecture into code repositories, and ensuring solutions provide continuous audit and compliance.
Learning Objectives:
- Understand the use cases for migrating or replicating databases to the cloud
- Learn about the benefits of cloud-native databases for performance and cost reduction
- See how AWS Database Migration Service helps with your migration and how AWS Schema Conversion Tool makes conversions simple and quick
Moving or replicating your databases to the cloud should be simple and inexpensive. AWS has recently enhanced the AWS Database Migration Service and the AWS Schema Conversion Tool with new data sources to increase your migration options. You can now export from MongoDB databases and from Greenplum, IBM Netezza, HPE Vertica, Teradata, Oracle DW, and Microsoft SQL Server data warehouses to AWS. Learn how to export and migrate your data and procedural code with minimal downtime to the cloud database of your choice, including cloud-native offerings such as Amazon Aurora, Amazon DynamoDB, and Amazon Redshift.
This document discusses purpose-built databases and managed database services on AWS. It begins by explaining how data needs are rapidly expanding and changing due to factors like microservices and analytics. It then introduces several purpose-built AWS databases like Amazon Aurora, DynamoDB, DocumentDB, ElastiCache, and Neptune that are optimized for different use cases. Benefits highlighted include performance, scalability, availability, and that they are fully managed. Two customer examples of Duolingo and Capital One migrating to AWS databases are provided. The document concludes by discussing the advantages of moving to managed databases on AWS over self-managed databases.
In addition to running databases in Amazon EC2, AWS customers can choose among a variety of managed database services. These services save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service; Amazon RDS, a relational database service in the cloud; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started. We will also have with us Jeongsang Baek, the VP of Engineering from IGAWorks, Korea’s No.1 mobile business platform, who will walk us through their architecture and share with us the key insights that they gained from using the various AWS database technologies to deliver a reliable, efficient and cost-effective experience.
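As a minimal illustration of DynamoDB's key-value model covered in the fundamentals (the table and attribute names are hypothetical, not from this session), the definition below shows the partition-key/sort-key schema a `create_table` call takes:

```python
def events_table_definition(table_name):
    """Table definition for a per-user event stream (illustrative names)."""
    return {
        "TableName": table_name,
        "KeySchema": [
            # HASH key spreads items across partitions for scale;
            # RANGE key supports ordered per-user range queries.
            {"AttributeName": "user_id", "KeyType": "HASH"},
            {"AttributeName": "event_time", "KeyType": "RANGE"},
        ],
        "AttributeDefinitions": [
            {"AttributeName": "user_id", "AttributeType": "S"},
            {"AttributeName": "event_time", "AttributeType": "N"},
        ],
        "BillingMode": "PAY_PER_REQUEST",
    }

# Provisioning would be a single call:
#   boto3.client("dynamodb").create_table(**events_table_definition("events"))
```

Choosing a high-cardinality partition key such as a user ID is what lets the fully managed service scale throughput without hot spots.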
Power your apps with a secure, scalable, and durable back end on Amazon Web Services. Whether you are looking to minimize your operational overhead or to maintain tight control, AWS offers a spectrum of database options. Learn about those options and how to choose the right architecture for your apps.
Attend this session for a technical deep dive about RDS Postgres and Aurora Postgres. Come hear from Mark Porter, the General Manager of Aurora PostgreSQL and RDS at AWS, as he covers service specific use cases and applications within the AWS worldwide public sector community. Learn More: https://aws.amazon.com/government-education/
This presentation summarizes Amazon Redshift data warehouse service, its architecture and best practices for application development using Amazon Redshift.
Amazon Redshift is a fully managed petabyte-scale data warehouse service in the cloud. It provides fast query performance at a very low cost. Updates since re:Invent 2013 include new features like distributed tables, remote data loading, approximate count distinct, and workload queue memory management. Customers have seen query performance improvements of 20-100x compared to Hive and cost reductions of 50-80%. Amazon Redshift makes it easy to setup, operate, and scale a data warehouse without having to worry about provisioning and managing hardware.
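A toy model of why columnar storage delivers these query-performance gains (the row width and column count below are illustrative assumptions, not Redshift internals): a row store must read whole rows from disk, while a column store reads only the columns a query touches, so an aggregate over one column scans a fraction of the data.

```python
NUM_COLS = 10    # assumed columns per row
COL_BYTES = 8    # assumed bytes per column value
ROW_BYTES = NUM_COLS * COL_BYTES


def bytes_scanned_row_store(num_rows, num_cols_queried):
    # Whole rows are read regardless of how many columns the query needs.
    return num_rows * ROW_BYTES


def bytes_scanned_column_store(num_rows, num_cols_queried):
    # Only the queried columns are read from disk.
    return num_rows * num_cols_queried * COL_BYTES


print(bytes_scanned_row_store(1_000_000, 1))     # 80000000
print(bytes_scanned_column_store(1_000_000, 1))  # 8000000
```

Redshift combines this I/O reduction with compression (similar values stored together compress well) and massively parallel processing across nodes, which is where the 20-100x speedups over row-oriented systems come from.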
The document discusses several Amazon Web Services (AWS) managed database options. It begins by explaining why companies choose managed database services over self-managed options, noting that AWS handles maintenance, backups, scaling and other tasks. It then summarizes the major AWS managed database services: Amazon Relational Database Service (RDS) for relational databases, Amazon DynamoDB for non-relational databases, Amazon ElastiCache for in-memory caching, and Amazon Redshift for data warehousing. For each service, it provides examples of common use cases and highlights features like automation, scalability, availability and pay-as-you-go pricing.
Amazon Web Services provides a number of database management alternatives for all types of customers. You can run managed relational databases, managed NoSQL databases, or a petabyte-scale data warehouse, or you can even operate your own online database in the cloud on Amazon EC2. Discover our database offerings and find out which service to use for your existing needs or for delivering your next big project. Find out about data migration services, tools and best practices for security, availability, and scalability, and hear some of the great database success stories from AWS customers. Speaker: Ari Newman, Account Manager & Rob Carr, Solutions Architect, Amazon Web Services Featured Customer - Atlassian
The document provides an overview of Amazon Web Services (AWS) databases and analytics services. It summarizes that AWS has significantly expanded its database and analytics offerings between 2015-2018, with over 750 new features and 10 new services launched. It highlights several core AWS database and analytics services, including Amazon DynamoDB, Amazon RDS, Amazon Aurora, Amazon Neptune, and Amazon ElastiCache. It also discusses how customers are migrating workloads from on-premises databases to AWS databases and analytics services.
This document summarizes Amazon Web Services database migration and replication services. It discusses how the AWS Database Migration Service can begin migrating databases between on-premises and cloud environments within 10 minutes, with virtually no application downtime. It also describes how the AWS Schema Conversion Tool can help migrate databases off Oracle and SQL Server to other database engines like MySQL. Finally, it provides an overview of Amazon RDS managed database services and high availability features.
We are excited to announce the immediate availability of MariaDB on Amazon RDS. You can now run your MariaDB database on AWS while taking advantage of RDS management features like automated backups, point-in-time recovery, cross-region replication, and multi-AZ deployments for high availability. In this session, you learn about how to leverage RDS to get the most out of your MariaDB database. Steven Grandchamp, Vice President and GM at MariaDB, is a participant in this session.
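To make the management features mentioned concrete, here is a hedged sketch of the parameters one might pass when provisioning a Multi-AZ MariaDB instance through the RDS API (the identifier, instance class, and sizes are assumptions for illustration, not recommendations from this session):

```python
def mariadb_instance_params(identifier):
    """Illustrative RDS create-instance parameters for MariaDB."""
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": "mariadb",
        "DBInstanceClass": "db.m5.large",   # assumed size
        "AllocatedStorage": 100,            # GiB, assumed
        "MultiAZ": True,                    # synchronous standby in a second
                                            # Availability Zone for HA
        "BackupRetentionPeriod": 7,         # days of automated backups, which
                                            # enables point-in-time recovery
        "MasterUsername": "admin",
    }

# A live call would add a password and run:
#   boto3.client("rds").create_db_instance(**mariadb_instance_params("mydb"),
#                                          MasterUserPassword=...)
```

Cross-region read replicas are created separately (for MySQL-family engines, via `create_db_instance_read_replica`), so the replication topology stays an operational knob rather than application logic.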
The cloud is all the rage. Does it live up to its hype? What are the benefits of the cloud? Join me as I discuss the reasons so many companies are moving to the cloud and demo how to get up and running with a VM (IaaS) and a database (PaaS) in Azure. See why the ability to scale easily, the speed with which you can create a VM, and the built-in redundancy are just some of the reasons that make moving to the cloud a “no brainer”. And if you have an on-prem datacenter, learn how to get out of the air-conditioning business!
(1) Amazon Redshift is a fully managed data warehousing service in the cloud that makes it simple and cost-effective to analyze petabytes of structured and semi-structured data.
(2) It provides fast query performance by using massively parallel processing and columnar storage techniques.
(3) Customers like NTT Docomo, Nasdaq, and Amazon have been able to analyze petabytes of data faster and at a lower cost using Amazon Redshift compared to their previous on-premises solutions.
Learning Objectives:
- Learn about the capabilities of the PostgreSQL database
- Learn about PostgreSQL offerings on AWS
- Learn how to migrate from Oracle to PostgreSQL with minimal disruption
The document provides information about migrating databases to AWS using Amazon Relational Database Service (RDS) and AWS Database Migration Service (DMS). It discusses:
- Key features of RDS, such as provisioning databases quickly with high availability, security, backups, and monitoring capabilities.
- How DMS allows migrating databases to AWS with minimal downtime by continuously replicating and migrating data between databases.
- Examples of customers who have successfully migrated large databases to AWS using RDS and DMS to improve scalability and availability and reduce costs compared to on-premises databases.
This document provides an overview of running enterprise workloads on Amazon Web Services (AWS). It defines what an enterprise application is, examples of applications commonly run by enterprises, and customer case studies of companies running SAP, Oracle, and Microsoft applications on AWS. The document discusses how AWS addresses key enterprise application requirements around security, availability, cost optimization, and performance. It provides architectural best practices and examples for setting up various enterprise applications and workloads on AWS.
AWS offers a wide variety of database services that adapt to your application's requirements. The database services are fully managed and can be deployed in a matter of minutes with just a few clicks. AWS services include Amazon Relational Database Service (Amazon RDS), compatible with 6 common database engines; Amazon Aurora, a MySQL-compatible relational database with up to 5 times better performance; Amazon DynamoDB, a fast and flexible NoSQL database service; Amazon Redshift, a petabyte-scale data warehouse; and Amazon ElastiCache, an in-memory caching service compatible with Memcached and Redis. AWS also provides AWS Database Migration Service, a service that lets you migrate databases to the AWS cloud simply and cost-effectively.
In the past year, Amazon RDS has continued to expand functionality, scalability, availability and ease of use for all supported database engines: PostgreSQL, MySQL, MariaDB, Oracle and Microsoft SQL Server. We’ll take a close look at RDS use cases and new capabilities, splitting the time between open-source and commercial database engines.