In this talk, Marius (@mariusbogoevici) and I (@christianposta) discuss the value of event-driven architectures (both their business and technical merits) and how the landscape of integration, streaming, messaging, and now functions/lambdas has evolved to implement EDA while balancing agility, utilization, and simplicity.
Christian Posta is a principal middleware specialist and architect who has worked with large microservices architectures. He discusses why companies are moving to microservices and cloud platforms like Kubernetes and OpenShift. He covers characteristics of microservices like small autonomous teams and decentralized decision making. Posta also discusses breaking applications into independent services, shedding dependencies between teams, and using contracts and APIs for communication between services.
Service-mesh technology promises to deliver a lot of value to a cloud-native application, but it doesn't come without some hype. In this talk, we'll look at what a "service mesh" is, how it compares to similar technology (Netflix OSS, API management, ESBs, etc.), and what options for service mesh exist today.
The document discusses microservices and APIs. It covers how microservices optimize for speed by shedding dependencies and having dependencies on demand through services and APIs. It discusses consumer contracts for APIs and service versioning. It also discusses using an API gateway pattern for scalability, security, monitoring and more. It promotes API management for benefits like access control, analytics, and monetization of microservices.
Knative builds on Kubernetes and Istio to provide "PaaS-like abstractions" that raise the level of abstraction for specifying, running, and modifying applications. Knative includes building blocks like Knative Serving for autoscaling container workloads to zero, Knative Eventing for composing event-driven services, Knative Build for building containers from source, and Knative Pipelines for abstracting CI/CD pipelines. While Knative can run any type of container, its building blocks help enable serverless-style functions by allowing compute resources to scale to zero and be driven by event loads.
Topics covered: 1. generating a new Remix project; 2. conventional files; 3. routes (including the nested variety); 4. styling; 5. database interactions (via SQLite and Prisma); 6. mutations, validation, and authentication; 7. error handling; 8. SEO with meta tags; and much more.
Service mesh abstracts the network from developers to solve three main pain points: how services communicate securely with one another; how services implement network resilience; and, when things go wrong, how we identify what happened and why. Service mesh implementations usually follow a similar architecture: traffic flows through control points between services (usually service proxies deployed as sidecar processes), while an out-of-band set of nodes is responsible for defining the behavior and management of those control points. This loosely breaks out into a "data plane" through which requests flow and a "control plane" for managing the service mesh. Different service mesh implementations use different data planes depending on their use cases and familiarity with particular technology. The control plane implementations vary between service-mesh implementations as well. In this talk, we'll take a look at three different control plane implementations, with Istio, Linkerd, and Consul, examine their strengths and their specific tradeoffs, and see how each chose to solve the three pain points above. We can use this information to make choices about a service mesh, or to inform our journey if we choose to build a control plane ourselves.
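The data-plane/control-plane split described above can be illustrated with a small sketch: the snippet below mimics what a sidecar proxy does for each request (a per-attempt timeout plus retries), driven by a policy object standing in for configuration pushed down from a control plane. The `policy` shape and function names are invented for illustration and are not any mesh's real API.

```javascript
// A retry/timeout policy like one a control plane would push to each
// sidecar proxy. (Hypothetical shape, for illustration only.)
const policy = { maxRetries: 2, timeoutMs: 1000 };

// Wrap a service call the way a data-plane proxy would: enforce a
// timeout on each attempt and retry failed attempts up to the limit.
async function callWithPolicy(fn, { maxRetries, timeoutMs }) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await Promise.race([
        fn(),
        new Promise((_, reject) =>
          setTimeout(() => reject(new Error('timeout')), timeoutMs)),
      ]);
    } catch (err) {
      lastError = err; // record the failure and fall through to retry
    }
  }
  throw lastError;
}
```

The point of the pattern is that `callWithPolicy` lives in the proxy, not the application: services keep making plain calls while resilience behavior is applied, and reconfigured, out of band.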
Kubernetes users need to allow traffic to flow into and within the cluster. Treating application traffic separately from the business logic presents new possibilities in how service-to-service traffic is served, controlled, and observed, and provides a transition to intra-cluster networking like service mesh. With microservices, there is a concept of both north/south traffic (incoming requests from end users to the cluster) and east/west (intra-cluster) communication between the services. In this talk we will explain how Envoy Proxy works in Kubernetes as a proxy for both of these traffic directions and how it can be leveraged to do things like traffic shaping and security, and to integrate north/south with east/west behavior. Christian Posta (@christianposta) is Global Field CTO at Solo.io, former Chief Architect at Red Hat, and well known in the community as an author (Istio in Action, Manning; Istio Service Mesh, O'Reilly, 2018; Microservices for Java Developers, O'Reilly, 2016), frequent blogger, speaker, open-source enthusiast, and committer on various open-source projects including Istio and Kubernetes. Christian has spent time at both enterprises and web-scale companies and now helps companies create and deploy large-scale, resilient, cloud-native distributed architectures. He enjoys mentoring, training, and leading teams to be successful with distributed systems concepts, microservices, DevOps, and cloud-native application design.
If you have an existing Java monolith, you know you must take care when making changes to it so as not to alter it in any negative way. Oftentimes these monoliths are very valuable to the business and generate a lot of revenue. At the same time, since it's difficult to make changes to the monolith, it's desirable to move to a microservices architecture. Unfortunately, you cannot just do a big-bang migration to a greenfield architecture and will have to adopt microservices incrementally. In this talk, we'll look at using the Gloo proxy, which is based on Envoy Proxy, together with GraphQL to do surgical, function-level traffic control and API aggregation to safely migrate your monolith to microservices and serverless functions.
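As a rough illustration of function-level traffic control (not Gloo's actual configuration model), the sketch below shifts a configurable fraction of calls for a single function from the monolith to a new service, which is the core mechanic of an incremental, canary-style migration. The route names and weights are hypothetical.

```javascript
// Route table: for each migrated function, what share of traffic goes
// to the new service. Everything else stays on the monolith.
// (Function names and weights are made up for this example.)
const routes = { checkout: { newServiceWeight: 0.1 } };

// Decide a destination for one request. `rand` is injectable so the
// decision is testable; it defaults to Math.random in production use.
function routeRequest(fnName, rand = Math.random) {
  const rule = routes[fnName];
  if (!rule) return 'monolith'; // unmigrated functions stay put
  return rand() < rule.newServiceWeight ? 'new-service' : 'monolith';
}
```

Raising `newServiceWeight` gradually from 0.1 toward 1.0, while watching error rates, is what makes the migration safe and reversible at each step.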
This document provides an introduction and overview of microservices architecture and patterns, with a focus on using Spring Boot and Spring Cloud to build microservices. It defines key concepts like patterns, anti-patterns, microservices and cloud native applications. It outlines several architectural patterns for microservices like immutable services, service registration and discovery, and service configuration. It also describes the Spring Boot and Spring Cloud frameworks for developing microservices and some of their main projects that support service discovery, routing, configuration, messaging and more.
There are many different approaches to how you let your microservices communicate with one another. Be it asynchronous or synchronous, choreographed or orchestrated, eventually consistent or distributed-transactional, fault tolerant or just a mess! In this session I will provide an overview of different concepts of microservice communication and their pros and cons. Along the way I'll try to throw in some anecdotes, success stories, and failures I learned from, so that you can hopefully take something home with you.
Building applications for cloud-native infrastructure that are resilient, scalable, secure, and meet compliance and IT objectives gets complicated. Another wrinkle for the organizations with which we work is the fact they need to run across a hybrid deployment footprint, not just Kubernetes. At Solo.io, we build application networking technology on Envoy Proxy that helps solve difficult multi-deployment, multi-cluster, and even multi-mesh problems. In this webinar, we’re going to explore different options and patterns for building secure, scalable, resilient applications using technology like Kubernetes and Service Mesh without leaving behind existing IT investments. We’ll see why and when to use multi-cluster topologies, how to build for high availability and team autonomy, and solve for things like service discovery, identity federation, traffic routing, and access control.
Embracing open source software for critical platform operations is a tough organizational evolution for a company of any size. This is particularly daunting for technology teams accustomed to a fully supported managed service. Come learn about how we are using OSS to modernize Health Care at UnitedHealth Group as a roadmap to adopt and offer OSS in your own organization! Over the last three years, Kafka as a Service within UnitedHealth Group has gone from non-existent to being centrally managed and utilized by over 200 internal application teams as an essential component to our ecosystem. In this session, I will share how to tactically implement a Kafka as a Service platform offering within any organization with a very lean team and how to get broad adoption from engineers and leadership. I'll discuss the engineering cultural changes needed, both on the DevOps team as well as more broadly, to adopt OSS. Spoiler: Documentation is the key to success. I will talk about some of our "aha" moments, including the importance of internal Terms of Service and how to encourage teams to "Google first." I will include things that haven't worked as well, such as requiring manual review of all topic creation PRs (this doesn't scale!). Attendees will learn how to both stand up their own OSS offering as well as how to be a good internal consumer of other such offerings. Come ready to learn and laugh about my journey to offering OSS to thousands of people!
The document discusses cloud-native application architectures and how they enable speed, safety, and scale through approaches like twelve-factor applications and microservices. It outlines the cloud-native stack and where governance is needed to secure different components like code, orchestration tools, containers, services, and infrastructure. The document argues that while cloud-native approaches are well-suited for technology companies, traditional enterprises face challenges in fully adopting these architectures due to differences in priorities, skills, and scale.
High productivity platforms enable rapid application development. Refresh your technology platform and adopt new DevOps practices.
The document discusses microservices and the Lagom framework. It provides an overview of what microservices are and the goals they aim to achieve, such as accelerating teams, reducing dependencies, and increasing application throughput. It then outlines several principles for microservices, including isolation, asynchronous APIs, immutable deployments, and exposing simplified APIs. Finally, it describes key aspects of the Lagom framework, which supports building microservices, such as its service and persistence APIs, development environment, and production environment using Lightbend Reactive Platform.
Today’s products (devices, software, and services) are well instrumented to permit users, vendors, and service providers to gather maximum insight into how they are used, when they need repair, and many other operational matters. Ensuring that products can rapidly adapt to a constantly changing environment and changing customer needs requires that the events they generate be analyzed continuously and in context. Insights can be synthesized from many sources in context: geospatial and proximity, trajectory, and even predicted future states. Customers, vendors, and service providers need to analyze, learn, and predict directly from streaming events because data volumes are huge and automated responses must often be delivered in milliseconds. To achieve insights quickly, we need to build models on the fly whose predictions are accurate and in sync with the real world, often to support automation. Many insights depend on analyzing the joint evolution of data sources whose behavior is correlated in time or space. In this talk we present Swim, an Apache 2.0 licensed platform for continuous intelligence applications. Swim builds a fluid model of data sources and their changing relationships in real time; Swim applications analyze, learn, and predict directly from event data. Swim applications integrate with Apache Kafka for event streaming. Developers need nothing more than Java skills. Swim deploys natively or in containers on Kubernetes, with the same code in each instance. Instances link to build an application-layer mesh that facilitates distribution and massive scale without sacrificing consistency. We will present several continuous intelligence applications in use today that depend on real-time analysis, learning, and prediction to power automation and deliver responses that are in sync with the real world.
We will show how easy it is to build, deploy and run distributed, highly available event streaming applications that analyze data from hundreds of millions of sources - petabytes per day. The architecture is intuitively appealing and blazingly fast.
Simon Green's presentation at Red Hat's Microservices, Containers, APIs, and Integration Day in NYC and DC, August 2018
Presentation at Red Hat's "Microservices, API, Integration and Container Day" event, Tustin, CA. June 21, 2018
Presentation at Red Hat's "API, Microservices, Integration and Container" day, Tustin, CA, 6/21/2018.
This document summarizes a talk about moving from a monolithic architecture to microservices. It discusses what microservices are, examples of large companies that adopted microservices like Amazon and Netflix, and the monolithic problems at Lendingkart. It then describes how Lendingkart broke up its monolith into multiple microservices for different functions. Some challenges of microservices like distributed tracing and increased operations overhead are also outlined. Best practices for adopting microservices like incremental adoption and clear interfaces are also provided.
Rodrigo Antonialli presented on microservices architecture. He began by defining microservices as independent services that communicate through lightweight mechanisms like HTTP APIs. Each service focuses on a specific business capability and can be independently deployed. Antonialli then discussed characteristics of microservices like componentization via services, organization around business capabilities, and infrastructure automation. He also covered enabling technologies like containers, messaging systems, and monitoring tools. Finally, Antonialli noted both pros and cons of microservices, such as improved scalability but also increased complexity. He recommended students focus on high-level concepts first and that experienced developers will know how to apply microservices appropriately based on their situation.
When we develop a loosely coupled and reusable application, the question often arises: how should services or applications communicate with one another? To a large extent, it depends on the nature of the request and the granularity of your applications or services. We will discuss the two classic microservice integration patterns: service choreography and orchestration. What is the difference between these two modes of communication? Which one should we use? How do we ensure data consistency? How do we implement distributed transactions? We will discuss these issues, consider an example of implementing orchestration on Node.js, and of course we will not forget about logging, monitoring, and alerting.
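The orchestration pattern mentioned above can be sketched in a few lines of Node.js: a central orchestrator invokes each service step in order and, if a step fails, runs the compensations of the already-completed steps in reverse, which is one common way to implement a distributed transaction as a saga. The step objects here are illustrative stand-ins for real service calls.

```javascript
// Run a saga: invoke steps in order; on failure, compensate completed
// steps in reverse order and rethrow the original error.
async function runSaga(steps) {
  const completed = [];
  for (const step of steps) {
    try {
      await step.invoke();
      completed.push(step);
    } catch (err) {
      // Undo what already succeeded, newest first, then surface the error.
      for (const done of completed.reverse()) {
        await done.compensate();
      }
      throw err;
    }
  }
  return 'committed';
}
```

For example, an order saga might have a `reserveStock` step compensated by `releaseStock` and a `chargePayment` step compensated by `refund`; if payment fails, only the stock reservation is rolled back.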
With microservices gone mainstream a few years ago, many organizations have now adopted them; even though all are paying the price in terms of training, solution complexity and operational costs, few are reaping the promised benefits. Lower velocity, quality and performance issues, along with an overall lack of visibility are what we hear about most often. In this session, working from our experience as advisors to software development teams, we’ll walk you through some of the symptoms you might experience, their possible causes and some potential solutions.
This document discusses strategies for refactoring monolithic applications into microservices when migrating from a relational database to a NoSQL database. It describes splitting the monolith by fracturing modules into encapsulated services. Alternatively, it proposes strangling the monolith by gradually creating new services around the edges of the existing monolith. When migrating data, it also discusses moving from shared database tables to independent data ownership between services. The document advocates for independent release cycles and a share-nothing architecture between loosely coupled microservices.
Enterprise architectures never sleep because cloud-first strategies must also become multi-cloud-first strategies. Public cloud providers such as Microsoft Azure are providing compelling services and pricing. And, most enterprises now consider their own datacenter a private cloud. This is not a one-cloud playing field, and enterprise architects must develop strategies, standards, and policies about how their data is being used, moved, and created across multiple cloud infrastructures. Join Pivotal's Jag Mirani and Mike Stolz along with guest, Forrester Vice President and Principal Analyst, Mike Gualtieri, as they examine the trends driving multi-cloud adoption and, more importantly, how to architect technical solutions to make data free to roam among them safely. Speakers: Mike Gualtieri, VP, Principal Analyst, Forrester; Jag Mirani, Product Marketing, Data Services, Pivotal; Mike Stolz, Product Lead, GemFire, Pivotal
The document discusses microservices and some of the challenges of moving to a microservices architecture. It describes what microservices are and how decomposing an application into loosely coupled services can help deliver changes rapidly and reliably. However, distributing an application in this way introduces challenges around distributed data, consistency, and event-driven messaging. Patterns like sagas, event sourcing and CQRS are discussed as ways to help maintain consistency when data is distributed across multiple microservices.
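Of the patterns named above, event sourcing is perhaps the easiest to show in miniature: instead of storing current state, a service stores the events that happened and rebuilds state by folding over them, which also gives CQRS read models a natural feed. The event names below are invented for illustration.

```javascript
// Rebuild an account's state by replaying its event log.
// Unknown event types are ignored so old replays tolerate new events.
function replay(events) {
  return events.reduce((account, event) => {
    switch (event.type) {
      case 'Deposited':
        return { ...account, balance: account.balance + event.amount };
      case 'Withdrawn':
        return { ...account, balance: account.balance - event.amount };
      default:
        return account;
    }
  }, { balance: 0 });
}
```

Because the log, not the balance, is the source of truth, the same events can feed multiple independent read models, which is exactly the separation CQRS exploits across distributed microservices.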
The nature of containerized, cloud-native applications is rapidly advancing with a fundamentally different architecture that will rely on service meshes with smarter proxies, traffic management, and enhanced observability for cooperating microservices, serverless functions, and complex workflows. In this session we will highlight the features that characterize this architectural transformation in the Docker cloud-native ecosystem.
SpringOne Platform 2017. David Turanski, Pivotal; Rohit Sood, Liberty Mutual; Rohit Kelapure, Pivotal; Justin Stone, Liberty Mutual. This session will detail a synthesis of techniques used to decompose a monolithic BPM- and orchestration-based application at Liberty Mutual into an event-driven, microservices-based architecture implemented with event sourcing and CQRS. The transformation and developer-productivity gains effected by the monolith decomposition, and the alignment of business capabilities to bounded contexts, offer lessons for all enterprises looking to undergo similar changes.
This document provides an introduction to cloud computing, including definitions, characteristics, benefits, and applications. It discusses the National Institute of Standards and Technology's (NIST) definition of cloud computing and reference architecture. The document also covers cloud reference models, design principles for cloud architecture, and key components of the NIST cloud computing reference architecture such as cloud providers, consumers, brokers, auditors, and carriers.