(Presented by Stackdriver) Key decisions related to architecture, tools, processes, and even team composition can have a dramatic effect on the human effort required to operate distributed applications on AWS. If you make the wrong decisions in these areas, you spend your days, nights, weekends, and vacations dealing with issues and noise. If you make the right decisions, you and your team can focus on building customer value, and your time away from work is spent… not working. Stackdriver and SmugMug describe the seven most important practices that world-class operations teams employ to minimize operational overhead, highlighting real-world examples to illustrate the importance of each.
Ever wondered how to manage connections to SQL databases from serverless applications, or how to rate limit and build serverless state machines? This presentation discusses patterns you can use to build the most complex serverless applications!
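Rate limiting, one of the patterns the session mentions, is commonly implemented with a token bucket. A minimal sketch of the idea (the class and parameter names are illustrative, not taken from the talk):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: holds up to `capacity` tokens,
    refilled continuously at `refill_rate` tokens per second."""

    def __init__(self, capacity, refill_rate, clock=time.monotonic):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Consume one token if available; return whether the call is allowed."""
        now = self.clock()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Each request draws a token; bursts are absorbed up to `capacity`, and sustained traffic is limited to `refill_rate` requests per second.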
This talk is an evolution of the one presented at FOSDEM'14. We cover common practices and methodologies for autoscaling, along with best practices and where autoscaling fits in the broader scope of your infrastructure.
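The most common autoscaling methodology is threshold-based scaling on a metric such as average CPU, with a dead band between the scale-out and scale-in thresholds to prevent flapping. A minimal sketch of that decision logic (thresholds and names are illustrative, not from the talk):

```python
def desired_capacity(current, cpu_avg, scale_out_at=70.0, scale_in_at=30.0,
                     minimum=2, maximum=10, step=1):
    """Classic threshold-based autoscaling decision.

    Scale out above `scale_out_at`, scale in below `scale_in_at`;
    the gap between the two thresholds is a dead band that avoids
    oscillating between adding and removing instances.
    """
    if cpu_avg > scale_out_at:
        return min(maximum, current + step)
    if cpu_avg < scale_in_at:
        return max(minimum, current - step)
    return current
```

Real autoscalers add cooldown periods and metric smoothing on top of this core rule, but the threshold-plus-dead-band shape is the same.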
The term “reactive” has lately become a buzzword, with a variety of definitions around the Web. When you hear reactive, what do you think of? Reactive Streams? The Reactive Manifesto? ReactJS? These terms may seem unrelated, but they share a common core concept. Reactive applications and reactive programming result in flexible, concise, performant code and are a superior alternative to the old standard thread-based imperative programming model. The reactive approach has gained popularity recently for one simple reason: we need alternative designs and architectures to meet today’s demands. However, it can be difficult to shift one’s mind to think in reactive terms due to how accustomed we’ve become to the imperative style. Stephen Pember explores the various definitions of reactive and reactive programming with the goal of providing techniques for building efficient, scalable applications. Steve dives into the key concepts of Reactive Streams and examines some sample implementations—including how ThirdChannel is currently using reactive libraries in production code. Steve looks at some of the open source options available in the JVM—including Reactor, RxJava, and Ratpack—giving attendees an idea of where to begin with the reactive ecosystem. If reactive is new to you, this should be an excellent introduction.
This document provides an overview of reactive applications and Reactive Streams. It discusses the need for reactive approaches to address increasing performance demands and microservices. Reactive applications are responsive, resilient, elastic and asynchronous. Reactive Streams provide a common abstraction for data streams and asynchronous data sources using an observer pattern. The document also summarizes several Reactive Streams implementations for the JVM like RxJava and frameworks like Spring WebFlux, Play and Vert.x that support reactive programming.
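The core Reactive Streams abstraction is a small set of interfaces (on the JVM: `Publisher`, `Subscriber`, and `Subscription`) in which the subscriber signals demand, giving backpressure. A toy Python rendering of the idea, purely to illustrate the contract (the class names here are illustrative, not the spec's):

```python
class RangePublisher:
    """Toy publisher: emits the integers 0..n-1, but only as demanded."""

    def __init__(self, n):
        self.n = n

    def subscribe(self, subscriber):
        subscriber.on_subscribe(_Subscription(self.n, subscriber))

class _Subscription:
    """Carries demand signals from subscriber back to publisher."""

    def __init__(self, limit, subscriber):
        self.next, self.limit, self.subscriber = 0, limit, subscriber

    def request(self, n):
        # Push at most n items: the subscriber controls the pace (backpressure).
        for _ in range(n):
            if self.next >= self.limit:
                self.subscriber.on_complete()
                return
            self.subscriber.on_next(self.next)
            self.next += 1

class CollectingSubscriber:
    """Collects items, initially requesting a small batch."""

    def __init__(self, batch=2):
        self.items, self.done, self.batch = [], False, batch

    def on_subscribe(self, subscription):
        self.subscription = subscription
        subscription.request(self.batch)

    def on_next(self, item):
        self.items.append(item)

    def on_complete(self):
        self.done = True
```

The key difference from a plain observer pattern is `request(n)`: the producer may never emit more than the consumer has asked for, which is what keeps a fast source from overwhelming a slow sink.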
This document discusses deploying Active Directory on AWS. It notes that while building an Active Directory infrastructure in a company normally takes days, it can be done on AWS in just 40 minutes. It then covers topics like why deploy AD on AWS, how to migrate or extend an existing on-premises AD to AWS, and post-deployment operations like DNS and DHCP configuration to point to the new domain controllers.
The document summarizes the 2nd Annual Startup Launches event hosted by Amazon.com on November 14, 2013. It includes presentations from several startup companies including KoalityCode, CardFlight, Runscope, SportXast, Nitrous.IO, and SPOT101. Each startup pitched their product or service and how it leverages AWS cloud services. Special offers for AWS re:Invent attendees were also announced.
Amazon Elastic Compute Cloud (Amazon EC2) has added a number of instance types that provide a high level of performance. Instances range from compute-optimized instances to instances that deliver thousands of IOPS. In this session, you will learn more about Amazon EC2 high performance instance types and hear from customers about how they are using these instances to improve application performance, and reduce costs.
AWS offers many data services, each optimized for a specific set of structure, size, latency, and concurrency requirements. Making the best use of all specialized services has historically required custom, error-prone data transformation and transport. Now, users can use the AWS Data Pipeline service to orchestrate data flows between Amazon S3, Amazon RDS, Amazon DynamoDB, Amazon Redshift, and on-premises data stores, seamlessly and efficiently applying EC2 instances and EMR clusters to process and transform data. In this session, we demonstrate how you can use AWS Data Pipeline to coordinate your Big Data workflows, applying the optimal data storage technology to each part of your data integration architecture. Swipely's Head of Engineering shows how Swipely uses AWS Data Pipeline to build batch analytics, backfilling all their data, while using resources efficiently. Consequently, Swipely launches novel product features with less development time and less operational complexity.
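A Data Pipeline is declared as a list of objects (schedules, data nodes, activities) expressed as key/value fields. A rough sketch of building such a definition in Python; the field names and values below are illustrative assumptions, so consult the service documentation for the exact schema:

```python
def pipeline_object(obj_id, name, **fields):
    """Build one object in the key/value shape pipeline definitions use.

    NOTE: illustrative only -- real definitions distinguish string
    values from references to other objects, which is omitted here.
    """
    return {
        'id': obj_id,
        'name': name,
        'fields': [{'key': k, 'stringValue': v} for k, v in fields.items()],
    }

# A hypothetical nightly flow: S3 input -> EMR transform.
definition = [
    pipeline_object('Schedule', 'Nightly',
                    type='Schedule', period='24 hours'),
    pipeline_object('Input', 'RawEvents',
                    type='S3DataNode',
                    directoryPath='s3://example-bucket/raw/'),
    pipeline_object('Transform', 'Aggregate',
                    type='EmrActivity', schedule='Schedule', input='Input'),
]
```

The point of the declarative format is that scheduling, retries, and resource provisioning (EC2 instances, EMR clusters) are handled by the service rather than by hand-rolled glue code.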
Mobile apps have different service requirements from their desktop and web-based analogs. Bandwidth, client processing, and other considerations can impose significant extra demands on a scalable service. This session is a technical discussion of the challenges Flipboard met while scaling a data-intensive mobile app from 0 to 100 million clients and how they are working on scaling 10x using AWS. At each major step, Flipboard has encountered many challenges. Learn about how they handled those challenges and the evolution of their systems architecture, design choices, and software selection.
In this session, learn how Trend Micro built Deep Security as a service on AWS. This service offers enterprise-grade security controls for AWS deployments in the form of intrusion detection and prevention, anti-malware, a firewall, web reputation, and integrity monitoring. With over 400 internal requirements set by their in-house Information Security and IT Operations teams, the Service team was challenged with building the case to deploy Deep Security as a service on AWS instead of in-house. This session walks through the reasons why the team chose AWS, the design decisions they made, and how they were able to meet or exceed their in-house requirements while deploying on AWS.
Ruby developers: attend this session and learn about the next major version of the AWS SDK for Ruby, the aws-core gem. We dive deep into the SDK, covering topics such as waiters, request enumeration and pagination, resource modeling, version locking, and more. Learn how to take advantage of these features as we construct a sample Ruby application using the AWS SDK.
This document discusses how to move applications to the cloud. It begins by defining cloud computing characteristics like self-service, on-demand access, resource pooling, and broad network access. It then contrasts traditional high availability approaches with service resiliency models better suited for cloud environments. The document provides guidance on re-architecting both new "greenfield" applications and existing "legacy" applications for the cloud, emphasizing a focus on services rather than servers and using distributed architectures and data stores.
A talk I did recently on microservices and functional programming. Microservices are small, single-purpose apps run as services, usually composed together to form the full application.
SmugMug spent six years split between its datacenters and AWS. Find out how and why SmugMug went 100% AWS, migrating 30 TB of databases, hundreds of frontends, load balancing, and caches, across the US in one night with zero downtime. We show you specific techniques and processes that made our large-scale migration a resounding success: moving massive MySQL databases, testing and sizing a new AWS infrastructure, automating AWS operations, managing the risks involved in wholesale infrastructure change, and architecting for reliability in multiple AWS Availability Zones. We talk about the performance, scalability, operational, and business benefits and challenges we've seen since moving 100% to AWS. Finally, we share secrets about our favorite AWS products.
Whether you're a startup getting to profitability or an enterprise optimizing spend, it pays to run cost-efficient architectures on AWS. Dive deep into techniques used by successful customers to reduce waste and fine-tune their AWS spending, often with improved performance and a better end-customer experience. Some techniques covered in this session: Learn how to make the most of Auto Scaling, develop an effective Spot Instance strategy, and optimize for your daily traffic cycles. Learn techniques to tier storage, offload your static content to Amazon S3 and Amazon CloudFront, reduce your database loads with edge caching, spawn part-time databases, pool resources across accounts, and even teach your dev/test instances to sleep. Showcasing easily-applicable methods, this session could be your best invested hour all day.
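One of the techniques listed, teaching dev/test instances to sleep, boils down to a scheduled job that stops non-production instances outside working hours. A sketch of the selection logic only (the tag names, hours, and record shape are illustrative assumptions, decoupled from any real API call):

```python
def instances_to_stop(instances, hour_utc, workday=(8, 19)):
    """Return ids of dev/test instances that should sleep off-hours.

    `instances` is a list of dicts with 'id', an 'env' tag, and 'state' --
    a simplified stand-in for what a describe-instances call would return.
    """
    start, end = workday
    off_hours = not (start <= hour_utc < end)
    return [i['id'] for i in instances
            if off_hours
            and i.get('env') in ('dev', 'test')
            and i['state'] == 'running']
```

In practice you would run this from a scheduled job and pass the resulting ids to a stop-instances call; filtering on tags keeps production strictly out of scope.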
The document discusses the benefits of using a polyglot approach to application development. It argues that a single technology or programming language is no longer suitable for modern applications given the variety of data types, need for scalability, and rapid pace of change. The document provides examples of when different programming languages may be better suited to different tasks, such as using Node.js for real-time interactions, Scala for data processing, and JavaScript for querying. It advocates choosing the right tool based on factors like maturity, features, learning curve, and productivity.
Implementation of a disaster recovery (DR) site is crucial for the business continuity of any enterprise. Due to the fundamental nature of features like elasticity, scalability and geographic distribution, DR implementation on AWS can be done at 10-50% of the conventional cost. In this session, we do a deep dive into proven DR architectures on AWS and the best practices, tools and techniques to get the most out of them. This session is recommended for attendees who wish to explore options for ensuring the continuity of their business.
This document summarizes a presentation given on July 11, 2013 in London by Rackspace's Unlocked team. The presentation introduced the team members and discussed why unlocked events are held. It then covered topics including the hybrid cloud, how developers are driving innovation, and a case study of how HubSpot uses the hybrid cloud. Key points emphasized that the hybrid cloud gives developers the most power and freedom, and that developers driving innovation is important.
Come learn about the new features we're launching at FutureStack, as well as what our roadmap looks like for the next year. We'll also share how we think about our products and what our process is for deciding what to build next.
This document contains the slides from a presentation by Patrick Chanezon on cloud computing. Some key points from the presentation include:
- Cloud computing has evolved from consumer websites needing to solve problems with large data sets, storage capacity, and scalability. This led to public cloud services from companies like Amazon and Google.
- While infrastructure as a service provides virtualization and scalability, platforms are still needed to build distributed applications. Platform as a service providers aim to make application development easier by providing services and hiding infrastructure details.
- Agile development processes are better suited for the fast iteration cycles needed when developing applications for consumer markets with short product lifetimes. Cloud platforms help enable more agile development.
The document discusses considerations for building a private cloud using OpenStack Folsom. It covers topics such as the definition of a private cloud, sizing instances and flavors, network architecture including multiple networks, image storage and performance, and architecture examples for different sizes of private clouds. The document provides guidance on capacity planning, performance bottlenecks, and best practices for building a private cloud with OpenStack.
This document summarizes Viadeo's experience moving their entire infrastructure to AWS. It describes their motivation for moving to the cloud to focus on their product instead of managing hardware. It outlines their step-by-step process, including automating infrastructure with CloudFormation, baking AMIs, and continuous integration/delivery. It discusses lessons learned around networking, databases, and involving stakeholders in the transition. Currently, their staging and production environments run in AWS across multiple regions, with future plans to migrate more services.
Anurag Gupta spoke at the AWS Big Data Meetup in Palo Alto and described the AWS DevOps culture. In the talk he gives pointers on how service owners can setup monitoring that will continually reduce operational burden.