In this session, we show you how to use Amazon Route 53 to consolidate your DNS data and manage it centrally. Learn how to use Amazon Route 53 for public DNS and for private DNS in VPC, and also learn how to combine Amazon Route 53 private DNS with your own DNS infrastructure.
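As a hedged sketch of the private-DNS piece (domain name, VPC ID, and region below are placeholders, not from the session), a private hosted zone can be associated with a VPC at creation time:

```python
import uuid
import boto3

route53 = boto3.client('route53')

# Create a private hosted zone that resolves only inside the given VPC
route53.create_hosted_zone(
    Name='internal.example.com',
    VPC={'VPCRegion': 'us-east-1', 'VPCId': 'vpc-0abc123def456'},  # placeholders
    CallerReference=str(uuid.uuid4()),  # idempotency token
    HostedZoneConfig={'Comment': 'private DNS for VPC', 'PrivateZone': True},
)
```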
- Kerberos is used to authenticate Hadoop services and clients running on different nodes that communicate over a non-secure network; it uses tickets for authentication.
- Key configuration changes are required to enable Kerberos authentication in Hadoop, including setting hadoop.security.authentication to kerberos and generating keytabs containing principal keys for HDFS services.
- Services are associated with Kerberos principals through keytabs, which are then configured for use by the relevant Hadoop processes and services.
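As a rough illustration of the configuration changes described above (not taken from the deck; the realm and keytab path are placeholders), the core-site.xml and hdfs-site.xml fragments might look like this:

```xml
<!-- core-site.xml: switch authentication from "simple" to Kerberos -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

<!-- hdfs-site.xml: associate the DataNode service with its principal/keytab -->
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>dn/_HOST@EXAMPLE.COM</value>  <!-- placeholder realm -->
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/security/keytabs/dn.service.keytab</value>  <!-- placeholder path -->
</property>
```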
Heavy users monopolizing cluster resources are a frequent cause of slowdowns for others. With only one namenode and thousands of datanodes, any poorly written application is a potential distributed denial-of-service attack on the namenode. In this talk, you will learn how to prevent slowdowns caused by heavy users and poorly written applications by enabling IPC Quality of Service (QoS), a new feature in Hadoop 2.6+. On Twitter’s and eBay’s production clusters, we’ve seen response times of 500 milliseconds with QoS off drop to 10 milliseconds with QoS on during heavy usage. We’ll cover how IPC QoS works and share our experience on how to tune performance.
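For context, IPC QoS is enabled per RPC port by swapping in the FairCallQueue. A minimal sketch, assuming 8020 is the NameNode's RPC port (exact tuning keys vary by version):

```xml
<!-- core-site.xml: replace the default FIFO call queue with FairCallQueue
     on port 8020 (assumed here to be the NameNode RPC port) -->
<property>
  <name>ipc.8020.callqueue.impl</name>
  <value>org.apache.hadoop.ipc.FairCallQueue</value>
</property>
```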
PowerPoint file (incl. animations!): http://db.tt/oQiXb9lq These are the slides of the presentation "WordPress optimization", given at WordCamp 2013. Learn how to improve your WordPress performance and make your website more than 700% faster!
Treasure Data is a data analytics service company that makes heavy use of Ruby in its platform and services. It uses Ruby for components like Fluentd (log collection), Embulk (data loading), scheduling, and its Rails-based API and console. Java and JRuby are also used for components involving Hadoop and Presto processing. The company's architecture includes collectors that ingest data, PlazmaDB for storage, workers that process jobs on Hadoop and Presto clusters, and schedulers that queue and schedule those jobs using PerfectSched and PerfectQueue, which are written in Ruby. Hive jobs are built programmatically in Ruby, which generates configurations and submits the jobs to the underlying Hadoop clusters.
This document provides a top-ten list of tips for improving PHP and web application performance. The tips include tweaking realpath cache settings, using offline processing whenever possible, writing efficient SQL queries, not executing queries in loops, caching data, using a content delivery network, and using APC caching with apc.stat set to 0. Together they cover optimizing PHP, database, and infrastructure performance.
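As a hedged illustration of two of those tips (the values here are placeholders to tune, not recommendations from the deck):

```ini
; php.ini — enlarge the realpath cache so repeated stat() calls on include
; paths are amortized across requests
realpath_cache_size = 4096k
realpath_cache_ttl  = 600

; APC — stop re-stat()ing source files on every request (requires a cache
; flush or restart whenever code is deployed)
apc.stat = 0
```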
Tips and tricks for high-performance websites. The latest version of my Nginx / PHP slide deck, as presented on my APAC 2016 tour.
Find out more about Deep Learning in terms of:
• AI
• Infrastructure
• Common neural network architectures and use cases
• An introduction to Apache MXNet
• Demos
• Resources
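To give a flavor of the MXNet introduction, here is a minimal hedged sketch (not taken from the deck; layer sizes are arbitrary) that defines and runs a tiny feed-forward network with the Gluon API:

```python
import mxnet as mx
from mxnet.gluon import nn

# A tiny two-layer feed-forward network (sizes chosen only for illustration)
net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'),
        nn.Dense(10))
net.initialize()  # default initializer, CPU context

# Forward pass on a random batch of 32 examples with 784 features
x = mx.nd.random.uniform(shape=(32, 784))
print(net(x).shape)  # (32, 10)
```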
DISQUS is a comment system that handles high volumes of traffic, with up to 17,000 requests per second and 250 million monthly visitors. Its main challenges are unpredictable traffic spikes and ensuring high availability. The architecture includes over 100 servers split between web servers, databases, caching, and load balancing. DISQUS employs techniques like vertical and horizontal data partitioning, atomic updates, delayed signals, consistent caching, and feature flags to scale its large Django application.
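To make one of those techniques concrete, here is a generic sketch of an atomic update in Django (not DISQUS's actual code; the Post model and post_id are hypothetical). It avoids the read-modify-write race by pushing the increment into the database:

```python
from django.db import models
from django.db.models import F

class Post(models.Model):          # hypothetical model for illustration
    likes = models.IntegerField(default=0)

post_id = 42                       # hypothetical primary key

# Instead of the racy pattern (post.likes += 1; post.save()), issue a
# single atomic UPDATE ... SET likes = likes + 1 in the database:
Post.objects.filter(pk=post_id).update(likes=F('likes') + 1)
```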
The document discusses developing and deploying applications using hybrid cloud strategies. It provides an overview of different cloud platforms and services that can be used as part of a hybrid cloud approach, including Amazon Web Services, Windows Azure, and Orchestra. It then discusses various architecture patterns for deploying applications in a hybrid way, such as using a single server setup, separating the database onto its own server, using multiple database servers with replication, deploying multiple web servers behind a load balancer, offloading static files, and implementing auto-scaling and caching.
The document discusses HDInsight cluster architecture and configuration. It describes how HDInsight clusters connect to Azure data stores like Azure Blob Storage and Azure Data Lake Store. It also discusses using Azure Data Factory for HDInsight orchestration and monitoring an HDInsight cluster.
Hortonworks provides best practices for system testing Hadoop clusters. It recommends testing across different operating systems, configurations, workloads and hardware to mimic a production environment. The document outlines automating the testing process through continuous integration to test over 15,000 configurations. It provides guidance on test planning, including identifying requirements, selecting hardware and workloads to test upgrades, migrations and changes to security settings.
Scaling Ruby applications and redesigning them to fit the enterprise. This talk brings together techniques and tips we used to run a large-scale enterprise system in Ruby.
This document provides an overview of Amazon Route 53 DNS services, including:
- IPv4 and IPv6 address spaces and how Route 53 resolves domain names to IP addresses using A records.
- Common DNS record types like NS, SOA, and CNAME, and how they work.
- Route 53 routing policies for controlling traffic: simple, weighted, latency, failover, and geolocation routing.
- How alias records can simplify configuration by automatically reflecting changes to referenced resources.
- An example of setting up Route 53 with domains, record sets, Elastic Load Balancers, and instances across regions.
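As a hedged sketch of one of those routing policies (the zone ID, record name, and IP below are placeholders), a weighted record set can be created through the Route 53 API like this:

```python
import boto3

route53 = boto3.client('route53')

# Send ~70% of traffic to this endpoint; a second record with the same name
# but a different SetIdentifier and Weight=30 would receive the rest.
route53.change_resource_record_sets(
    HostedZoneId='Z_EXAMPLE',          # placeholder zone ID
    ChangeBatch={'Changes': [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'www.example.com.',
            'Type': 'A',
            'SetIdentifier': 'primary',
            'Weight': 70,
            'TTL': 60,
            'ResourceRecords': [{'Value': '203.0.113.10'}],
        },
    }]},
)
```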
This document provides tips and tricks for optimizing website performance. It discusses running PHP applications on Nginx instead of Apache to improve request handling efficiency. Specific optimizations covered include using PHP-FPM or HHVM as PHP run modes, caching static assets and database queries, and leveraging Nginx caching features like FastCGI caching and integration with Memcached. Migrating to Nginx from Apache and optimizing the PHP and Nginx configuration can significantly improve a website's performance and ability to handle high traffic loads.
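For flavor, a minimal FastCGI-cache fragment of the kind such decks show (paths, the zone name, socket location, and durations are illustrative, not the presenter's values):

```nginx
# Define an on-disk cache zone for PHP responses
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:100m inactive=60m;

server {
    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm.sock;   # assumed PHP-FPM socket path
        include fastcgi_params;

        fastcgi_cache phpcache;                # use the zone defined above
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;           # cache successful responses for 10 minutes
    }
}
```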
Memcached is a free and open-source distributed memory caching system that can be used to speed up dynamic web applications by reducing database load. It stores objects in memory so that frequently or recently used results are returned very quickly. Common things to cache include query results, objects that are expensive to compute, and anything that takes time to generate, such as database calls, API calls, or page rendering. The memcached client knows all memcached servers and hashes each key to determine which server to store or retrieve an object from. Objects are stored under keys and have a maximum size of 1 MB. Commands like get, set, add, and delete are used to interact with the cache.
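A minimal hedged sketch in Python, using the pymemcache library (server addresses are placeholders), showing client-side key hashing across multiple servers and the basic commands:

```python
from pymemcache.client.hash import HashClient

# The client hashes each key to pick one of the servers (placeholder addresses)
client = HashClient([('10.0.0.1', 11211), ('10.0.0.2', 11211)])

client.set('user:42:profile', b'{"name": "Alice"}', expire=300)  # cache for 5 min
print(client.get('user:42:profile'))    # b'{"name": "Alice"}'
client.add('user:42:profile', b'...')   # no-op: add only succeeds if key is absent
client.delete('user:42:profile')
```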
In this popular session, you will learn about the latest features and use cases for Amazon EBS, including best practices, an overview of newly introduced features, and brand-new re:Invent announcements. In particular, we will cover the expanded portfolio of volume types, including provisioned IOPS, cold storage, and throughput-optimized. This session will help database admins and application architects understand how to balance performance and cost for applications such as big data analytics, data warehousing, and transactional and NoSQL databases.
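As a hedged illustration of choosing among those volume types (the Availability Zone, sizes, and IOPS figures are placeholders):

```python
import boto3

ec2 = boto3.client('ec2')

# Throughput-optimized HDD (st1) for sequential big-data workloads
ec2.create_volume(AvailabilityZone='us-east-1a', Size=500, VolumeType='st1')

# Provisioned-IOPS SSD (io1) for a latency-sensitive transactional database
ec2.create_volume(AvailabilityZone='us-east-1a', Size=100,
                  VolumeType='io1', Iops=4000)
```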
Peek behind the scenes to learn about Amazon ElastiCache's design and architecture. See common design patterns of our Memcached and Redis offerings and how customers have used them for in-memory operations and achieved improved latency and throughput for applications. During this session, we review best practices, design patterns, and anti-patterns related to Amazon ElastiCache. We also include a demo where we enable Amazon ElastiCache for a web application and show the resulting performance improvements.
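To give a flavor of the pattern such a demo uses, here is a generic cache-aside sketch in Python against a Redis-mode ElastiCache endpoint (the endpoint name and query function are placeholders, not from the session):

```python
import redis

# Placeholder ElastiCache Redis endpoint
r = redis.Redis(host='my-cache.abc123.use1.cache.amazonaws.com', port=6379)

def get_user(user_id):
    key = f'user:{user_id}'
    cached = r.get(key)
    if cached is not None:
        return cached                  # cache hit: skip the database entirely
    row = query_database(user_id)      # hypothetical expensive database call
    r.setex(key, 300, row)             # cache the result for 5 minutes
    return row
```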
In this session, we provide a peek behind the scenes to learn about Amazon ElastiCache's design and architecture. See common design patterns with our Redis and Memcached offerings and how customers have used them for in-memory operations to reduce latency and improve application throughput. During this session, we review ElastiCache best practices, design patterns, and anti-patterns.
AWS Summit 2014 Melbourne - Breakout 5 Cloud computing gives you a number of advantages, such as being able to scale your application on demand. As a new business looking to use the cloud, you inevitably ask yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We will show you how to best combine different AWS services, make smarter decisions for architecting your application, and best practices for scaling your infrastructure in the cloud. Presenter: Craig Dickson, Solutions Architect, Amazon Web Services
Collecting and processing terabytes of data per day is a challenge for any technology company. As marketers and brands become more sophisticated consumers of data, enabling granular levels of access to targeted subsets of data from outside your firewalls presents new challenges. This session discusses how to build scalable, complex, and cost-effective data processing pipelines using Amazon Kinesis, Amazon EC2 Spot Instances, Amazon EMR, and Amazon Simple Storage Service (S3). Learn how MediaMath revolutionized their data delivery platform with the help of these services to empower product teams, partners, and clients. As a result, a number of innovative products and services are delivered on top of terabytes of online user behavior. MediaMath covers their journey from legacy batch processing and vendor lock-in to a new world where the raw materials to build advanced lookalike models, optimization algorithms, or marketing attribution models are readily available to any engineering team in real time, substantially reducing the time - and cost - of innovation.
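A minimal hedged sketch of the ingestion step of such a pipeline (the stream name and record shape are placeholders, not MediaMath's schema):

```python
import json
import boto3

kinesis = boto3.client('kinesis')

event = {'user_id': 'u-123', 'action': 'impression', 'ts': 1700000000}

# The partition key groups a user's events onto the same shard, preserving order
kinesis.put_record(
    StreamName='user-events',            # placeholder stream name
    Data=json.dumps(event).encode(),
    PartitionKey=event['user_id'],
)
```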
AWS Summit 2014 Perth - Breakout 1 Many IT professionals are using Amazon Web Services (AWS) to deploy, scale, and manage Microsoft Windows Server workloads and applications such as SharePoint Server, SQL Server, and Microsoft Exchange Server, all of which are fully supported on the AWS Cloud. Attend this session to find out:
- How to determine your licensing strategy in the cloud
- How to modernize your Windows Server 2003 applications before End of Support
- AWS .NET benefits and services, and much more
This document summarizes a presentation on taking a DevOps approach to security. Some key points include: DevOps improves security posture through practices like configuration management, automation, and immutable infrastructure. However, security tools have not kept pace with DevOps velocity. The presentation advocates integrating security practices into DevOps workflows, such as through continuous security testing, centralized logging, and managing vulnerabilities through standardized base images. Moving forward, software-defined security can help leverage cloud visibility and automate security responses in real-time.
The document discusses building a mobile application on AWS that is location-centric and connects with the user's mobile device. It describes using AWS services like Elastic Beanstalk, EC2, S3, DynamoDB, SQS, and CloudFront to develop a minimum viable product within 2.5 days that demonstrates key AWS concepts. The core architecture involves using Elastic Beanstalk for application deployment, EC2 and EBS for compute and storage, DynamoDB for session storage, SQS for pushing content, and CloudFront for content delivery. Visual Studio is used to develop and publish the application directly to AWS.
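As a hedged sketch of the session-storage piece (the table name and attributes are placeholders, not the presenters' schema):

```python
import time
import boto3

dynamodb = boto3.client('dynamodb')

# Store a web session keyed by session ID, with an expiry timestamp
dynamodb.put_item(
    TableName='sessions',                                    # placeholder table
    Item={
        'session_id': {'S': 'sess-abc123'},
        'data':       {'S': '{"cart": ["item-1"]}'},
        'expires_at': {'N': str(int(time.time()) + 3600)},   # one hour from now
    },
)
```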
AWS Summit 2014 Melbourne - Breakout 1 Businesses of all sizes are archiving their data to the AWS Cloud in order to reduce costs while taking advantage of highly secure, highly durable, and simple cloud based storage services. With AWS, you pay as you go and you can scale up and down as required. With your data stored in the AWS Cloud, it’s easy to use other Amazon Web Services to take advantage of additional cost savings and benefits. Amazon storage services remove the need for complex and time-consuming capacity planning, ongoing negotiations with multiple hardware and software vendors, specialized training, and maintenance of offsite facilities or transportation of storage media to third party offsite locations. Amazon Web Services now offers a robust set of hybrid storage solutions for customers that currently operate and maintain data centers. Our Next Generation Enterprise Storage strategy has at its heart Amazon S3. This highly scalable, extremely durable storage service combines with a diverse set of Cloud Storage Gateways to provide businesses with a new approach to Enterprise storage. Presenter: Jeff Putt, Business Development Manager, APAC, Amazon Web Services
Sivakanth Mundru presented on Amazon Web Services CloudTrail. CloudTrail continuously records API calls made on AWS services and delivers log files to customers. The number of supported services has grown from 7 to over 30. CloudTrail logs can be used to determine who made a call, when, what action was performed, which resources were involved, and from/to where. It also records client errors, server errors, and authorization failures. Customers can aggregate logs across regions and accounts.
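For instance, recent API activity can be queried programmatically (a generic hedged sketch; the attribute and value are placeholders):

```python
import boto3

cloudtrail = boto3.client('cloudtrail')

# Who launched instances recently, and when?
resp = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName',
                       'AttributeValue': 'RunInstances'}],
    MaxResults=10,
)
for e in resp['Events']:
    print(e['EventTime'], e.get('Username'), e['EventName'])
```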
Startups face a range of challenges as they build their MVP and strategize ways to grow their business while keeping tabs on expenses. These can be overcome with the right tools and support teams. Our customers in their early phases have benefited from using Amazon CloudFront to scale their business on demand across various markets and technologies, create a top-notch customer experience, and cut costs significantly by integrating Amazon CloudFront into their overall architecture. Also hear from Michael Smith Jr., Chief Product Officer of Spuul, the largest Indian online video site, who shares Spuul's early challenges, how the AWS Cloud helped it deliver a superior and consistent customer experience, and best practices and tips for startups.
Managing a large portfolio of reservations across an ever-changing infrastructure requires a sophisticated and systematic approach. Attendees in this session walk away with a strategy for maximizing Reserved Instance (RI) coverage in their organization, as well as an understanding of specific tools and tactics to put that strategy into action. Sponsored by Cloudability. Topics include: - Reducing cycle times on the RI buying process - Building a RI-friendly architecture - Implementing a buy-measure-learn methodology that adapts to change
This document summarizes a presentation about security on AWS. It discusses that security is a shared responsibility between AWS and customers. AWS provides security capabilities across people and procedures, network security, physical security, and platform security. Customers are responsible for security controls like access management, data handling, and incident response. The presentation emphasizes that customers have visibility, auditability, and control over their environments on AWS to securely manage access, encrypt data, and monitor systems. It provides examples of how AWS services like CloudTrail, IAM, and encryption help customers securely use AWS.
Application requirements have changed dramatically in recent years, requiring millisecond or even microsecond response times and 100 percent uptime. This change has led to a new wave of "reactive applications" with architectures that are event-driven, scalable, resilient, and responsive. In this session, we present the blueprint for building reactive applications on AWS. We compare reactive architecture to the classic n-tier architecture and discuss how it is cost-efficient and easy to implement using AWS. Next, we walk through how to design, build, deploy, and run reactive applications in the AWS cloud, delivering highly responsive user experiences with a real-time feel. This architecture uses Amazon EC2 instances to implement server push to broadcast events to application clients; AWS messaging (Amazon SQS/SNS); Amazon SWF to decouple system components; Amazon DynamoDB to minimize contention; and Elastic Load Balancing, Auto Scaling, Availability Zones, Amazon VPC, and Amazon Route 53 to make reactive applications scalable and resilient.
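To make the event-driven piece concrete, here is a hedged sketch of broadcasting an event with SNS (the topic ARN and payload are placeholders). Subscribed queues and endpoints receive the event without the producer knowing who the consumers are, which is the decoupling reactive architectures rely on:

```python
import json
import boto3

sns = boto3.client('sns')

# Publish an application event to a topic; SQS queues, Lambda functions, or
# HTTP endpoints subscribed to the topic all receive a copy.
sns.publish(
    TopicArn='arn:aws:sns:us-east-1:123456789012:app-events',  # placeholder ARN
    Message=json.dumps({'type': 'order_created', 'order_id': 'o-789'}),
)
```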
"In the event of a disaster, you need to be able to recover lost data quickly to ensure business continuity. For critical applications, keeping your time to recover and data loss to a minimum as well as optimizing your overall capital expense can be challenging. This session presents AWS features and services along with Disaster Recovery architectures that you can leverage when building highly available and disaster resilient applications. We will provide recommendations on how to improve your Disaster Recovery plan and discuss example scenarios showing how to recover from a disaster. We will also include a real life customer example of a deployment using AWS for High Availability and Disaster Recovery.
The document provides 10 tips for scaling a startup from 0 to 10M users using AWS services:
1. Learn early and often through iterative development.
2. Focus on building a simple product that works well rather than on features.
3. Leverage AWS services to avoid solving problems yourself.
4. Focus less on infrastructure management and more on scaling core services.
5. Use auto-scaling to optimize resource usage.
6. Design for distributed systems and fault tolerance from the start.
7. Analyze data to continuously improve products and user experience.
8. Control costs as your user base grows through reserved instances and analytics.
9. Optimize static content delivery through S3 and CloudFront.
Building a real-time data analysis infrastructure is a challenging task that requires experienced engineers. With AWS services, you can do it in a matter of minutes, scale it easily to handle almost unlimited load, and keep infrastructure costs low. This session is an opportunity to see a live demo of building an infrastructure that combines Amazon Kinesis, Redshift, DynamoDB, EMR, and CloudSearch to collect, process, and share data.
Functional overview of Gartner's in-depth assessment of AWS and Azure and the decision factors that customers can use to decide between them.
Description: This session will feature best practices in the real world for deploying AWS cloud services. You will hear about cloud use cases, governance, security, cloud architecture, optimizing costs, and leveraging appropriate support offerings. The session will provide insight into experience from hundreds of government customers’ AWS adoption and highlight lessons learned along the way.
Learn how to increase the effectiveness of your security operations as you move to the cloud. This session for architects and IT administrators covers considerations for optimizing your incident response, monitoring, and audit response tactics to take advantage of built-in capabilities in AWS. This session provides practical advice you can apply today, pulled from industry research, direct experience helping customers migrate to the cloud, and from the speaker's own hard-earned lessons. Sponsored by Trend Micro.
Across all industries worldwide, HPC is helping innovative users achieve breakthrough results, from leading-edge academic research to data-intensive applications such as weather prediction and large-scale manufacturing in the aerospace and automotive sectors. As HPC-powered simulations continue to grow ever larger and more complex, scientists are looking for cost-effective high-performance compute resources that are available when they need them. Access to on-demand infrastructure creates opportunities to experiment and try new speculative models. AWS provides computing infrastructure that allows scientists and engineers to solve complex science, engineering, and business problems using applications that require high-bandwidth, low-latency networking and very high compute capabilities. Driven by its flexibility and affordability, many HPC and big data workloads are transitioning from on-premises environments entirely onto AWS. But as with on-premises HPC, maximizing the performance of "HPC cloud" workloads requires fast and highly scalable storage. Intel® Cloud Edition for Lustre Software has been purpose-built for use with the dynamic computing resources available from Amazon Web Services, providing the fast, massively scalable storage software needed to accelerate performance, even on complex workloads.
The document provides guidelines for deploying an L.N.M.P environment on a 64-bit server. It specifies directory locations for source code, installed software, scripts and logs. It also outlines steps to update the system, install and configure MySQL, Nginx, PHP and other packages, including compiling Nginx with specific modules and options, setting Nginx as a service, and enabling syntax highlighting for Nginx configuration files.
At Dev-Pro, DevOps engineers work with Terraform on Azure. The team manages many environments and resources, including AKS (Kubernetes). Sergey will share his experience of successfully writing modules and providers for Terraform.
This document provides an overview of how to set up and manage a MongoDB sharded cluster. It describes the key components of a sharded cluster including shards, config servers, and mongos query routers. It then provides step-by-step instructions for deploying, upgrading, and troubleshooting a sharded cluster. The document explains how to configure shards, config servers, and mongos processes. It also outlines best practices for upgrading between minor and major versions of MongoDB.
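As a hedged illustration of the enable-sharding step (the database, collection, and shard key are placeholders; the commands are issued against a mongos query router, not a shard directly):

```python
from pymongo import MongoClient

# Connect to a mongos query router (placeholder address)
client = MongoClient('mongodb://mongos.example.com:27017')

# Enable sharding for a database, then shard a collection on a hashed key
client.admin.command('enableSharding', 'appdb')
client.admin.command('shardCollection', 'appdb.users',
                     key={'user_id': 'hashed'})
```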
Michael Hennecke, Chief Technologist, HPC Storage and Networking, Lenovo DAOS User Group event, November 2020.
1. The document demonstrates how to use various AWS services like Kinesis, Redshift, and Elasticsearch to analyze streaming game log data.
2. It shows setting up an EC2 instance to generate logs, creating a Kinesis stream to ingest the logs, and building Redshift tables to run queries on the logs.
3. The document also explores loading the logs from Kinesis into Elasticsearch for search, and linking Kinesis and Redshift with Kinesis Analytics for real-time SQL queries on streams.
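A hedged sketch of the consumption side of such a walkthrough (the stream and shard names are placeholders; a production reader would typically use the Kinesis Client Library rather than raw shard iterators):

```python
import boto3

kinesis = boto3.client('kinesis')

# Read from the beginning of a single shard (placeholder stream/shard IDs)
it = kinesis.get_shard_iterator(
    StreamName='game-logs',
    ShardId='shardId-000000000000',
    ShardIteratorType='TRIM_HORIZON',
)['ShardIterator']

resp = kinesis.get_records(ShardIterator=it, Limit=100)
for record in resp['Records']:
    print(record['Data'])  # the raw log bytes as produced upstream
```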