This document discusses how the economics of cloud computing will change how Java applications are developed. Cloud providers charge for computing resources on an hourly basis (e.g. $ per GB per hour), which means applications need to use resources efficiently. Java applications generally use more memory and have longer startup times than those written in other languages. To be cost effective in the cloud, Java applications will need to reduce their memory footprint, decrease startup times, and be designed to fail and recover gracefully. The rise of APIs and microservices also requires changes to make Java more modular and efficient in constrained environments.
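The per-GB-per-hour pricing model can be made concrete with a back-of-envelope calculation. This is a minimal sketch with a hypothetical rate, not a real provider's price list:

```java
public class CloudCostEstimate {
    // Cost of keeping one instance resident for a month at a given memory
    // footprint. pricePerGbHour is an illustrative placeholder rate.
    static double monthlyMemoryCost(double heapGb, double pricePerGbHour) {
        double hoursPerMonth = 24 * 30;
        return heapGb * pricePerGbHour * hoursPerMonth;
    }

    public static void main(String[] args) {
        // Comparing a 2 GB footprint with a 1 GB footprint at $0.005 per GB-hour:
        System.out.printf("2 GB footprint: $%.2f/month%n", monthlyMemoryCost(2.0, 0.005));
        System.out.printf("1 GB footprint: $%.2f/month%n", monthlyMemoryCost(1.0, 0.005));
    }
}
```

Because the bill scales linearly with resident memory, halving the footprint halves the memory cost, which is exactly the incentive the abstract describes.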
Businesses are speeding up development and automating operations to remain competitive and to help large organizations scale. Project-based monolithic application updates are being replaced by product teams owning containerized microservices. This puts developers on call, responsible for pushing code to production, fixing it when it breaks, and managing the cost and security aspects of running their microservices. In this world, operations skill sets are either embedded in the microservice development teams or devoted to building and operating API-driven platforms. The platform automates stress testing, canary-based deployment and penetration testing, and enforces availability and security requirements. There are no meetings to attend or tickets to file in the delivery process for updating a containerized microservice, which can happen many times a day and takes seconds to complete. The role of site reliability engineering moves from firefighting and fixing outages to building tools for finding problems and routing those problems to the right developers. SREs manage the incident lifecycle for customer-visible problems, and measure and publish availability metrics. This may sound futuristic, but Werner Vogels described it as “You build it, you run it” back in 2006.
This document discusses increasing developer productivity through serverless computing. It begins by outlining various types of cognitive load on developers and how serverless can help minimize extraneous load. It then discusses how technical debt and inability to evolve can reduce productivity. Serverless is presented as helping reduce technical debt through writing less code and fewer dependencies. The total cost of ownership advantages of serverless are covered, including no infrastructure maintenance, built-in auto-scaling, ability to do more with fewer resources, lower technical debt, and faster time to market. Best practices like evolutionary architecture, DevOps, and chaos engineering are discussed for effectively leveraging serverless. Recent improvements to serverless offerings from AWS are summarized.
Building add-ons for Atlassian products today means building a Connect add-on and running it as a service in your own infrastructure, or a PaaS provider’s infrastructure, or (more commonly) a set of microservices. While this has many benefits, the transition from monolithic to distributed systems brings with it additional failure modes that simply do not manifest in the world of local function calls. Join Atlassian developer Diego Berrueta for a walk-through of 5 resilience techniques that will help keep your services rock-solid in the face of unreliable, slow, or faulty systems. Diego Berrueta, Engineering Principal, Atlassian
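One of the standard techniques for coping with the failure modes of remote calls is retry with exponential backoff. The sketch below is an illustrative stand-in, not Atlassian's API or the talk's actual code; production systems would combine it with timeouts and circuit breakers:

```java
import java.time.Duration;
import java.util.concurrent.Callable;

// Minimal retry-with-exponential-backoff: re-attempt a flaky remote call,
// doubling the wait between attempts so a struggling downstream service
// is not hammered while it recovers.
public class Retry {
    public static <T> T withBackoff(Callable<T> call, int maxAttempts, Duration initialDelay)
            throws Exception {
        long delayMs = initialDelay.toMillis();
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();        // remote call that may fail transiently
            } catch (Exception e) {
                last = e;
                if (attempt == maxAttempts) break;
                Thread.sleep(delayMs);     // wait before the next attempt
                delayMs *= 2;              // exponential backoff
            }
        }
        throw last;                        // give up after maxAttempts failures
    }
}
```

Libraries such as resilience4j provide hardened versions of this pattern (with jitter, retry budgets and circuit breaking) for the JVM.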
The document summarizes Adrian Cockcroft's experience giving talks about Netflix's approach to technology over time. It notes that initially people reacted skeptically, saying Netflix's approach was crazy and wouldn't work (2009-2010). Later, people said it could only work for large companies like Netflix (2011). By 2012, people said they wanted to adopt a similar approach but couldn't. The document outlines key lessons learned from Cockcroft's time at Netflix, including that speed wins in the marketplace and removing friction from product development helps enable faster innovation.
DevOps originated from the Toyota Production System which pioneered lean manufacturing practices like just-in-time production and continuous improvement. These concepts influenced early software development methodologies like agile, Scrum, and extreme programming. As software development aimed to deliver value faster, operations struggled to keep up, highlighting the need for closer collaboration between development and operations teams. In 2008, Patrick Debois coined the term "DevOps" to describe this integration. Since then, DevOps adoption has grown significantly, though its core goals of empowering employees, delivering value, and embracing change remain the same.
Web Services and microservices, the effect on vendor lock-in, and a taxonomy of several kinds of lock-in.
We'll discover why it is a risky bet not to *aim* to manage infrastructure and its configuration with idempotence and immutability at heart. Sharing real-world experience, we'll see why configuration should not be done by hand (it's like playing Jenga), and why what may work at the beginning does not hold up over a long period of time or at scale (the pets vs. cattle problem).
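Idempotence in miniature: an operation phrased as "ensure the desired state" can be run any number of times and converges on the same result, while "perform the change" fails or drifts on repeat runs. A minimal Java illustration (the class name is ours, purely for the example):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// "Ensure the directory exists" is idempotent: running it twice leaves the
// system in the same state. The non-idempotent alternative, Files.createDirectory,
// throws FileAlreadyExistsException on the second run.
public class EnsureDir {
    public static void ensure(Path dir) throws IOException {
        Files.createDirectories(dir);   // no-op if dir (and parents) already exist
    }
}
```

Configuration management tools apply the same principle at system scale: describe the target state, let the tool converge to it, and reruns are always safe.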
Do it like the "DevOps unicorns" Etsy, Facebook and Co.: deploy more frequently. But how, and why? What are the challenges? Deploying software faster without failing faster is possible through metrics-driven engineering. Identify problems early on using a "shift-left in quality". This requires a level-up of Dev, Test, Ops, and Biz. See some of the metrics that I think you need to look at, and how to upgrade your engineering team to produce better quality right from the start.
You might have heard about this DevOps thing, but what's it all about? This talk gives you a fast paced insight into real world horror stories from companies that didn't think DevOps practices mattered, and outlines 8 lessons we've learnt from helping people get DevOps initiatives successfully started.
The goal of Serverless is to focus on writing the code that delivers business value and to offload everything else to your trusted partners (like cloud providers or SaaS vendors). You want to iterate quickly, and today’s code quickly becomes tomorrow’s technical debt. In this talk we will show why Serverless adoption increases developer productivity and how to measure it. We will also go through AWS Serverless architectures where you only glue together different Serverless managed services, relying solely on configuration and minimizing the amount of code written.
ViaSat implemented a DevOps model and tools like Splunk, xMatters, Jira and HipChat to improve incident response times and enable automated collaboration across teams. Use cases described how fully closed-loop incidents could be managed from initial alert to resolution, with CI/CD pipelines allowing for automated deployments and documentation updates. Benefits included reducing response times from 10 minutes to 30 seconds on average, empowering on-call staff to focus on fixing issues rather than administrative tasks, and enabling seamless escalation to ChatOps teams.
There are typically four groups whose behavior influences the performance of an Atlassian application: users, application admins, add-on developers, and system administrators. Each plays a different role, and its impact on performance can be profound at scale. Dan Hardiker, Chief Technical Officer at Adaptavist, who has advised Fortune 500 companies on their Atlassian implementations, will share best practices and demonstrate how to use the process of "monitor, measure, mitigate" to identify key performance bottlenecks and provide data that your organization can use to optimize performance. Dan Hardiker, CTO, Adaptavist
Talk on "Performance Testing: Cloud Deployments", delivered by Shreyas Chaudhari and Manish Hemnani at VodQA Pune 2019, hosted at ThoughtWorks, Pune, on 16th March 2019.
The document discusses cloud native platforms and Pivotal Cloud Foundry. It describes Cloud Foundry as an opinionated and structured platform that defines strict contracts between the infrastructure layer, applications, and services. It also discusses how Cloud Foundry uses BOSH to automate infrastructure provisioning and orchestration, and how applications are deployed as containers through Diego cells.
Combination of Fast Delivery slides with Migrating to Microservices presented at GOTO Berlin in November 2014
Fixing Security by Fixing Software Development Using Continuous Deployment. Do you have an effective release cycle? Is your process long and archaic? Long release cycles are typically based on assumptions that haven't held since the 1980s and require very mature organizations to implement successfully. They can also disenfranchise developers from caring, or even knowing, about security or operational issues. Attend this session to learn more about an alternative approach to managing deployments through Continuous Deployment, otherwise known as Continuous Delivery. Find out how small but frequent changes to the production environment can transform an organization’s development process to truly integrate security. Learn how to get started with continuous deployment and what tools and processes are needed to make implementation within your organization a (security) success.
Full slide deck for day long discussion of microservices topics. Why use microservices, what options exist and how to migrate to them and address common problems.
The document discusses how the economics of cloud computing will change how Java applications are developed and deployed. Cloud providers charge for computing resources on an hourly basis, incentivizing lighter, more efficient applications. Java applications will need to reduce their memory footprints and startup times to lower costs. Developers will also need to design applications to be resilient to failures and easier to debug remotely without access to instances. The rise of APIs and metering of resources will require Java and the JVM to become leaner and more flexible to run optimally in cloud environments.
How far have you got with learning about Cloud? Got your head around Platform as a Service? Understand what IaaS means? Can spell Docker? Working in a DevOps mode? It’s easy to focus on learning new technology but it’s time to take a step back and look at what the technical implications are when an application is heading to the cloud. In the world of the cloud the benefits are high but the economics (financial and technical) can be radically different. Learn more about these new realities and how they can change application design, deployment and support. The introduction of Cloud technologies and its rapid adoption creates new opportunities and challenges. Whether designer, developer or tester, this talk will help you to start thinking differently about Java and the Cloud. Presented at JAX DE, 2016
The document discusses how the economics of cloud computing will change how Java applications are developed and deployed. Specifically: 1. In cloud computing, customers pay for computing resources like CPU and RAM on an hourly basis, creating a direct link between cost and resource usage. This will drive Java applications to use fewer resources to reduce costs. 2. Java applications will need to have faster startup times, smaller footprints, and be designed to fail and recover quickly to work well in cloud environments. 3. The growth of APIs and sharing data/services means Java developers will need to focus on building reliable, performant, and well-documented APIs to monetize data and services. 4. Significant changes
Presented at JAX London 2013 Per-tenant resource management can help ensure that collocated tenants peacefully share computational resources based on individual quotas. This session begins with a comparison of deployment models (shared: hardware, OS, middleware, everything) to motivate the multitenant approach. The main topic is an exploration of experimental data isolation and resource management primitives in IBM’s JDK that combine to help make multitenant applications smaller and more predictable.
A workshop held in StartIT as part of Catena Media learning sessions. We aim to dispel the notion that large PHP applications tend to be sluggish, resource-intensive and slow compared to what the likes of Python, Erlang or even Node can do. The issue is not with optimising PHP internals - it's the lack of proper introspection tools and getting them into our everyday workflow that counts! In this workshop we will talk about our struggles with whipping PHP applications into shape, as well as work together on some of the more interesting examples of CPU or IO drain.
Everyone is talking about building “cloud native” Java applications—and taking advantage of microservice architecture, containers, and orchestration/PaaS platforms—but there is surprisingly little discussion of migrating existing legacy (moneymaking) applications. This session aims to address this, and, using lessons learned from several real-world examples, it covers topics such as when to rewrite applications (if at all), modeling/extracting business domains, applying the “application strangler” pattern, common misconceptions about “12-factor” application design, and the benefits/drawbacks of container technology.
This document discusses building a full-stack application called MemeMail using Golang and Google Cloud Platform within one week. It describes choosing Google Cloud over other cloud providers for its ease of use. It then discusses the frontend implementation using Vue.js with a simple state mutation approach. The backend is built with Golang on App Engine using Cloud services like Datastore and Cloud Build for CI/CD. It emphasizes keeping the architecture simple rather than over-engineering for an MVP within a tight deadline.
This slide deck gives a brief presentation of how organizations can leverage the cloud to virtualize functional/performance testing and gain cost benefits over investing in hardware.
Erik Costlow, Product Evangelist at Contrast Security, was Oracle's principal product manager for Java 8 and 9, focused on security and performance. His security expertise involves threat modeling, code analysis, and instrumentation of security sensors. He is working to broaden this approach to security with Contrast Security. Before becoming involved in technology, Erik was a circus performer who juggled fire on a three-wheel vertical unicycle.
Presented by Erik Costlow, Contrast Security, at DevSecOps 101: Containers, Clouds, and Apps in Boston on May 16th, 2019.
This document discusses how integrating cloud services can help solve technology issues and reduce costs compared to building infrastructure from scratch. It provides 10 ways to utilize the cloud, including content delivery, CMS asset hosting, forms, backups, media streaming, development sandboxes, and encoding/processing large amounts of data. Examples are given of colleges that saved money by using cloud services for video hosting, CMS testing, and project development. Potential cloud providers like Amazon, Rackspace, and Mechanical Turk are also mentioned.
This document provides information about cloud computing and Drupal cloud hosting providers. It discusses traditional hosting limitations like high costs, difficulty maintaining servers, and downtime issues. Cloud computing evolved to address these through virtualization, pay-as-you-go models, and automatic scaling. The document then compares top Drupal cloud providers Acquia, Pantheon, and Platform.sh based on their base cloud provider, uptime SLAs, pricing, support offerings, development environments, and other features. It concludes that while each provider has pros and cons, budget and client requirements should determine the best choice for a given project.
When we talk about prices, we often only talk about Lambda costs. In our applications, however, we rarely use only Lambda. Usually we have other building blocks like API Gateway, data sources like SNS, SQS or Kinesis. We also store our data either in S3 or in serverless databases like DynamoDB or, more recently, Aurora Serverless. All of these AWS services have their own pricing models to look out for. In this talk, we will draw a complete picture of the total cost of ownership of serverless applications and present a decision-making list for determining whether to rely on the serverless paradigm in your project. In doing so, we look at the cost aspects as well as other aspects such as understanding the application lifecycle, software architecture, platform limitations, organizational knowledge, and platform and tooling maturity. We will also discuss current challenges in adopting serverless, such as the lack of low-latency ephemeral storage, insufficient network performance and missing security features.
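The point that Lambda is only one line item can be sketched numerically. The rates below are illustrative placeholders, not current AWS list prices, and the model ignores free tiers and data transfer:

```java
// Back-of-envelope serverless TCO: API Gateway request charges plus Lambda
// per-request and per-GB-second compute charges. All rates are placeholders.
public class ServerlessCost {
    static final double LAMBDA_PER_REQUEST      = 0.0000002;   // $ per invocation
    static final double LAMBDA_PER_GB_SECOND    = 0.0000166667; // $ per GB-second
    static final double API_GATEWAY_PER_REQUEST = 0.0000035;   // $ per request

    public static double monthlyCost(long requests, double avgDurationSec, double memoryGb) {
        double compute  = requests * avgDurationSec * memoryGb * LAMBDA_PER_GB_SECOND;
        double perCall  = requests * (LAMBDA_PER_REQUEST + API_GATEWAY_PER_REQUEST);
        return compute + perCall;
    }

    public static void main(String[] args) {
        // 10M requests/month, 200 ms average duration, 512 MB functions:
        System.out.printf("Estimated monthly cost: $%.2f%n",
                monthlyCost(10_000_000L, 0.2, 0.5));
    }
}
```

Notably, at these illustrative rates the API Gateway request charge exceeds the Lambda compute charge, which is exactly why a Lambda-only cost model underestimates the total.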
WSO2Con EU 2015: Keynote - Cloud Native Apps… from a user point of view Presenter: Alexis Richardson Co-founder and CEO, Weaveworks
The document discusses migrating application architectures to cloud-native designs. It begins by explaining the rise of cloud-native architectures, noting their ability to enable speed of innovation, always-available services, web scale, and mobile-centric experiences. Key motivations for adopting cloud-native architectures include enabling speed, safety, scale, and supporting mobile and client diversity. The document then defines characteristics of cloud-native architectures, highlighting twelve-factor applications and their emphasis on horizontal scaling, loose deployment coupling, and configuration via environment variables.
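The twelve-factor emphasis on configuration via environment variables means the same build artifact runs unchanged in every environment, with only the injected variables differing. A minimal Java sketch (the variable names are illustrative, not from the document):

```java
// Twelve-factor "config in the environment": read settings from environment
// variables, falling back to development defaults when a variable is unset.
public class Config {
    public static String get(String name, String fallback) {
        String v = System.getenv(name);
        return (v == null || v.isEmpty()) ? fallback : v;
    }

    public static void main(String[] args) {
        // In production the platform injects DATABASE_URL and PORT;
        // locally the fallbacks apply, with no code or rebuild needed.
        String dbUrl = get("DATABASE_URL", "jdbc:postgresql://localhost/dev");
        int port = Integer.parseInt(get("PORT", "8080"));
        System.out.println("Starting on port " + port + " against " + dbUrl);
    }
}
```

This is also what makes horizontal scaling and loose deployment coupling practical: any number of identical instances can be started anywhere, differentiated only by their environment.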
This document discusses the benefits of migrating to cloud-native application architectures. It provides speed, safety, and scale. Cloud-native architectures allow for rapid provisioning of resources and deployment of code changes. They promote safety through visibility into failures, isolation of failures to individual components, fault tolerance to prevent cascading failures, and automated recovery from failures. This enables developing and releasing code quickly while maintaining system stability.
Blue Shield of California revolutionized its portal environment by implementing IBM's PureApplication System. The new portal needed to be ready by October 1st for open enrollment under the Affordable Care Act. Blue Shield's previous infrastructure was non-converged and standalone, making application provisioning take up to 4 months. The PureApplication System provided pre-defined application patterns that allowed Blue Shield to deploy new environments in hours instead of weeks. This helped Blue Shield prepare for the higher website activity expected from the healthcare exchange.
Maven Central hits 1 Trillion downloads, Cyber bad guys make $6 Trillion, Governments respond and of course AI. What happened this year and what does it mean for 2024? A look at what Sonatype discovered in preparing the 9th State of the Software Supply Chain Report and what it could mean for developers in the future. 2024 is going to be difficult for all of us: find out how, why and just what you need to do next!
The document discusses how AI will transform software development and some of the challenges that come with increased use of AI, such as ensuring appropriate content from AI models and securing data used to train models. It notes cybercrime has a GDP comparable to major countries and there are concerns about the origins of data used in open source AI models. The document advocates for using AI tools but also calls for measures like a software bill of materials to help address security and integrity issues.