Amazon's CloudFront provides a self-service CDN solution without a contract. In this talk we will walk through the steps to set up CloudFront. We will also talk about how we measure page render time and take a look at how using CloudFront affects page render time for Polyvore in different countries. Slides from Polyvore Tech Talk #1
AWS provides several pricing options that can help you significantly reduce your overall IT cost, including On-Demand Instances, Spot Instances, and Reserved Instances. This session covers high-level architectures and when to use, and not to use, each pricing model for components of those architectures. We walk through several customer examples to illustrate when to use each pricing option, and we also walk through tools that can help you decide which pricing model to use. This session is aimed at technically savvy managers and engineers who need to reduce their cloud spending. Reasons to attend:
- Learn about Reserved Instances, On-Demand Instances, and Spot Instances.
- Discover ways of running more for less in Amazon EC2.
- If you are already running a workload in AWS, learn how to run the same workload at reduced cost.
This is a presentation that I gave at the AWS Meetup in Ann Arbor, Michigan back in January. It recounts some experiences that I had while working on a project with RightBrain Networks that involved moving millions of small files around between S3, Glacier and an NFS NAS volume. A good time was had by all.
This session will begin with an introduction to non-relational (NoSQL) databases and compare them with relational (SQL) databases. We will also explain the fundamentals of Amazon DynamoDB, a fully managed NoSQL database service. Learn the fundamentals of DynamoDB and see the new DynamoDB console first-hand as we discuss common use cases and benefits of this high-performance key-value and JSON document store.
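To make the key-value and JSON document model concrete, here is a minimal local sketch of a DynamoDB-style item. The table, attribute names, and key schema are hypothetical (a table keyed by a `user_id` partition key and a `created_at` sort key); the dictionary is the shape of the `Item` payload you would hand to a client library such as boto3's `Table.put_item`, shown here without any call to AWS:

```python
# Hypothetical table: partition key "user_id", sort key "created_at".
# An item is a JSON-style document; non-key attributes are schemaless.
item = {
    "user_id": "u-1001",          # partition key: selects the partition
    "created_at": "2016-05-01",   # sort key: orders items within the partition
    "status": "active",
    "tags": ["aws", "nosql"],     # nested/list values are stored as a document
}

def key_of(item):
    """Extract the primary key you would pass to get_item / delete_item."""
    return {"user_id": item["user_id"], "created_at": item["created_at"]}

print(key_of(item))
```

The partition key determines where the item lives and the sort key orders items that share a partition, which is why lookups by full primary key are a single, constant-time operation.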
This session dives deep into techniques used by successful customers who optimized their use of AWS. Learn tricks and hear tips you can implement right away to reduce waste, choose the most efficient instance, and fine-tune your spending, often with improved performance and a better end-customer experience. We showcase innovative approaches and demonstrate easily applicable methods for cost optimizing Amazon EC2, Amazon S3, and a host of other services to save you time and money.
Netflix is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. Netflix is a large, ever-changing ecosystem serving millions of customers across the globe through cloud-based systems and a globally distributed CDN. This entertaining romp through the tech stack serves as an introduction to how we think about and design systems, the Netflix approach to operational challenges, and how other organizations can apply our thought processes and technologies. We’ll talk about:
- The Bits - The technologies used to run a global streaming company
- Making the Bits Bigger - Scaling at scale
- Keeping an Eye Out - Billions of metrics
- Break all the Things - Chaos in production is key
- DevOps - How culture affects your velocity and uptime
This session drills deep into the Amazon S3 technical best practices that help you maximize storage performance for your use case. We provide real-world examples and discuss the impact of object naming conventions and parallelism on Amazon S3 performance, and describe the best practices for multipart uploads and byte-range downloads.
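One of the parallelism techniques such sessions cover is splitting a large object into HTTP `Range` requests so multiple GETs can run concurrently. The helper below is a minimal sketch (not taken from the session itself) that computes the `Range` header values you would pass to parallel byte-range downloads, e.g. via the `Range` parameter of an S3 `GetObject` call:

```python
def byte_ranges(object_size, part_size):
    """Split an object of `object_size` bytes into HTTP Range header
    values of at most `part_size` bytes each, for parallel ranged GETs."""
    ranges = []
    for start in range(0, object_size, part_size):
        end = min(start + part_size, object_size) - 1  # Range ends are inclusive
        ranges.append(f"bytes={start}-{end}")
    return ranges

# A 25 MB object fetched in 10 MB parts needs three ranged GETs.
print(byte_ranges(25 * 1024 * 1024, 10 * 1024 * 1024))
```

Each range can then be fetched by a separate worker and the parts reassembled in order, which is the download-side mirror of multipart uploads.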
Learn how AWS customers save money, time, and effort by using AWS's backup and archive services. Organizations of all sizes rely on AWS services to durably safeguard their data off-premises at a surprisingly low cost. This session will illustrate backup and archive architectures that AWS customers are benefiting from today.
The document discusses AWS Snowball, Snowball Edge, and Snowmobile - physical data transport solutions for migrating large amounts of data into AWS. Snowball is designed for petabyte-scale data migration, Snowball Edge provides petabyte-scale hybrid storage and compute capabilities, and Snowmobile is for exabyte-scale data migration using a 45-foot shipping container. The document provides details on their capabilities, use cases, security features, and cost. It also includes examples like how Oregon State University uses Snowball to migrate terabytes of oceanic research data and how DigitalGlobe migrated 100 petabytes of satellite imagery to AWS using Snowmobile.
Amazon EC2 allows you to bid for and run spare EC2 capacity, known as Spot Instances, in a dynamically priced market. On average, customers save 80% to 90% compared to On-Demand prices by using Spot Instances. Achieving these savings has historically required time and effort to find the best deals while managing compute capacity as supply and demand fluctuate. In this session, we dive into how customers who have designed scalable, cloud-friendly application architectures can leverage new Spot features to realize immediate cost savings while maintaining availability. Attendees will leave with practical knowledge of how, with well-architected applications, they can run production services on Spot Instances, just like IFTTT and Mapbox.
In this session, you will learn the key differences between a relational database management system (RDBMS) and non-relational (NoSQL) databases like Amazon DynamoDB. You will learn about suitable and unsuitable use cases for NoSQL databases, and strategies for migrating from an RDBMS to DynamoDB through a 5-phase, iterative approach. See how Sony migrated an on-premises MySQL database to the cloud with Amazon DynamoDB, and see the results of this migration.
Researchers at Clemson University assigned a student summer intern to explore bioinformatics cloud solutions that leverage MPI, the OrangeFS parallel file system, AWS CloudFormation templates, and a Cluster Scheduler. The result was an AWS cluster that runs bioinformatics code optimized using MPI-IO. We give an overview of the process and show how easy it is to create clusters in AWS.
After we launched Amazon Aurora, a cloud-native relational database with region-wide durability, high availability, fast failover, up to 15 read replicas, and up to five times the performance of MySQL, many of you asked us whether we could deliver the same features, but with PostgreSQL compatibility. We are now delivering a preview of Amazon Aurora with this functionality: we have built a PostgreSQL-compatible edition of Amazon Aurora, sharing the core Amazon Aurora innovations with the object-oriented capabilities, language interfaces, JSON compatibility, ANSI SQL:2008 compliance, and broad functional richness of PostgreSQL. Amazon Aurora will provide full PostgreSQL compatibility while delivering more than twice the performance of the community PostgreSQL database on many workloads. In this session, we will discuss the newest addition to Amazon Aurora in detail.
This advanced session targets Amazon Simple Storage Service (Amazon S3) technical users. We will discuss the impact of object naming conventions and parallelism on S3 performance, provide real-world examples and code that implements best practices for naming objects and parallelizing both PUTs and GETs, cover multipart uploads and byte-range downloads, and introduce GNU parallel as a quick and easy way to improve S3 performance.
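The naming-convention advice of that era boiled down to avoiding sequential key prefixes (timestamps, incrementing IDs), which concentrated load on one S3 index partition. A common remedy was to prepend a short hash of the key. The sketch below illustrates the idea; the MD5 choice and 4-character prefix length are illustrative assumptions, not prescriptions from the session, and note that S3 has since improved automatic per-prefix scaling:

```python
import hashlib

def hashed_key(key):
    """Prepend a short, deterministic hex prefix derived from the key,
    spreading otherwise-sequential keys across S3 index partitions."""
    prefix = hashlib.md5(key.encode("utf-8")).hexdigest()[:4]
    return f"{prefix}/{key}"

# Sequential date-based keys now land under well-distributed prefixes.
print(hashed_key("logs/2014/03/01/server-0001.gz"))
```

Because the prefix is derived from the key itself, readers can recompute it at lookup time; no mapping table is needed.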
Enterprises are quickly moving database workloads like SQL Server to the cloud, but with so many options, the best approach isn’t always obvious. You can exercise full control of your SQL Server workloads by running them on Amazon EC2 instances, or leverage Amazon RDS for a fully managed database experience. This session will go deep on best practices and considerations for running SQL Server on AWS. We will cover best practices for deploying SQL Server, how to choose between Amazon EC2 and Amazon RDS, and ways to optimize the performance of your SQL Server deployment for different application types. We review in detail how to provision and monitor your SQL Server databases, and how to manage scalability, performance, availability, security, and backup and recovery, in both Amazon RDS and Amazon EC2.
Accelerated computing is on the rise because of massively parallel, compute-intensive workloads such as deep learning, 3D content rendering, financial computing, and engineering simulations. In this session, we provide an overview of our accelerated computing instances, including how to choose instances based on your application needs, best practices and tips to optimize performance, and specific examples of accelerated computing in real-world applications.
AWS is a great fit for both steady-state and episodic computational workloads. Here we present some common architecture patterns for analyzing genomic and other biomedical data on scalable, high-throughput computational clusters on AWS. This talk will cover bootstrapping a traditional Beowulf compute cluster on Amazon EC2, as well as data transfer and storage strategies for Amazon S3.
Amazon Web Services provides startups with the low cost, easy to use infrastructure needed to scale and grow any size business. Attend this session and learn how to migrate your startup to AWS and make the most out of the platform.
My presentation for http://3dcamp.barcamp.ie/ at the University of Limerick on Saturday May 25th 2013
This short document promotes Haiku Deck, a tool for making slideshows, and encourages the reader to get started creating their own Haiku Deck presentation and sharing it on SlideShare.
Is it all about healthcare and technology, or are other things more important when designing products & solutions for the ageing society?
This document outlines a blog post by Marina Gorosito that shares samples and tutorials for using various online tools for ESL teaching. The blog contains 14 posts that provide samples and tutorials for tools like GoAnimate, Glogster, Sketchcast, and Zimmertwins. The final 5 posts propose activities for students to use these tools, including creating videos on issues, making posters on eating disorders, designing monsters in Sketchcast, and crafting cartoons in Zimmertwins.
The document provides an overview of library projects from 2008 featuring Lammhults Library Design products and services. It highlights projects in Ireland, France, Denmark, Kuwait, Norway, and Scotland. For each project, it briefly describes the library, architects involved, and Lammhults products installed, such as various shelving systems. The focus is on showcasing recently completed international library renovations and constructions that utilized Lammhults' expertise and solutions.
This was my Citrix Synergy 2013 presentation on why AppDNA should be a part of every consultant’s toolkit.